(Music) I just want to preface this video by quickly explaining why we're looking to use OpenCV. In this course, you've learned so far how to drag and drop images or zip files to classify images or create custom classifiers. We taught it this way so you could understand quickly and easily how IBM Watson Visual Recognition works. To do this at scale, we need to replicate what we have done in a programming language such as Python. The reason we are using Python is, first, that Python is very user-friendly, so it is easy to pick up. Second, Python has an expansive set of libraries, especially for image processing. So instead of reinventing the wheel, we can use a package from one of Python's libraries that has been built by someone else to help us in our task. It is crucial to understand the basics of OpenCV, as it will not only help us use IBM Watson Visual Recognition in Python, it will also open doors for working with computer vision in a broader scope.

In this video, I'll cover what OpenCV is, followed by various examples of some of the basic tasks you can do with this package. In the lab exercises, you will get a chance to explore some of these examples with OpenCV in Python.

So what is OpenCV? OpenCV is short for Open Source Computer Vision Library, and it is a package that developers can use in Python, C++, or Java. It can be used to process static images like photos, offline videos, or streaming video from, say, your webcam or a camera attached to a Raspberry Pi. Because it is a very popular computer vision package, capable of everything from basic to relatively complex image processing, OpenCV is often used together with more advanced packages like TensorFlow or PyTorch for deep learning. In the lab exercises, you'll be using OpenCV in Python, where the package is called cv2.
Just note that, regardless of which version of OpenCV you install, whether it's 3.0, 4.0, or 5.0, the package is still called cv2 in Python.

So let's go through some of the things you can expect to do with OpenCV, starting from the basics. You can crop, adjust, or rotate images. You can also transform images that are trapezoidal in shape to become square, like this Sudoku puzzle. You can also denoise images in OpenCV, with many different methods to do so. If you're interested in looking at the lines and edges in an image, I would recommend taking a look at Canny edge detection, an algorithm developed by John F. Canny in 1986.

Next, you can do something called color quantization, which can dramatically reduce the number of unique colors found in an image through a statistical technique called k-means clustering. In OpenCV, you can take an image with, say, 1,000 unique colors, including different shades of blue and yellow, and quantize it down to just two colors, four colors, eight colors, or essentially any k number of colors.

If you're looking to isolate objects in the foreground of a video given a static background, you can use a technique called background subtraction. This method is sometimes used for counting people or cars, but it really depends on the camera being static. Things that don't move or change from frame to frame are detected as the background and get subtracted out.

Have you ever taken photos like these, where it's either much too dark or much too bright? In OpenCV, there's a way to fix this. But before I explain how, it's important to understand why this might matter. For example, if you're building an app to recognize guests' faces in the lobby of a hotel, you want to be confident that your images aren't too dark or too bright, depending on the lighting conditions, or it may have trouble classifying faces.
To rectify underexposed or overexposed images, OpenCV uses histograms to determine whether an image might be too bright or too dark. The histogram takes all of the pixels in the image and counts them on a scale of 0 to 255, with 0 being completely black pixels and 255 being completely white pixels. You can see in this image that there are a lot of darker pixels, represented by the greenery and rocks, compared with whiter pixels. So if you were to see this histogram without even knowing what the image looks like, would you guess that it is underexposed or overexposed? Well, given that most of the pixels have a value closer to 0, a completely black pixel, the image is likely underexposed. Conversely, if you see most of the pixels around the white pixel value of 255, then the image is most likely overexposed. Of course, sometimes you could even run into histograms that have two peaks, one very dark and one very light. In that case, you might surmise that it is an image of something like a zebra.

You may also run into images like this one, which seem washed out or faded. If you look at the histogram, it becomes clear why: all of the pixels huddle around a small distribution of brightness values. There aren't a lot of really dark pixels or really bright pixels. With OpenCV, based on these histograms, you can use a statistical transformation to redistribute the pixels into a full distribution of black, to gray, to white pixels, taking a poorly exposed and poorly contrasted image and converting it into one with more ideal exposure and contrast.

We've talked primarily about image processing and transformations. In the lab exercises, we will cover just some examples of what you can do with OpenCV. But there's a lot more that you can do with OpenCV that you can continue to explore. Thank you for watching this video. (Music)