November 3, 2017
This week I learned how to use the JSON library to import Google images into Processing. I loaded the JSON library locally and asked it to download all Google images tagged with the keyword “art gallery”.
Below are some of the images that Processing automatically loaded into my canvas from the Google Image search engine.
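A minimal Processing sketch of this kind of pipeline might look like the following. Note that the file name “results.json” and the “items”/“link” field names are assumptions for illustration, since the post doesn’t show the actual JSON structure of the search response:

```java
// Sketch (assumed JSON structure): load a locally saved JSON file of
// image-search results, then download each image so it can be drawn
// to the canvas. Processing's loadImage() accepts URLs directly.
PImage[] images;

void setup() {
  size(800, 600);
  JSONObject response = loadJSONObject("results.json");   // hypothetical file name
  JSONArray items = response.getJSONArray("items");        // hypothetical field
  images = new PImage[items.size()];
  for (int i = 0; i < items.size(); i++) {
    String url = items.getJSONObject(i).getString("link"); // hypothetical field
    images[i] = loadImage(url);
  }
}

void draw() {
  background(0);
  // cycle through the downloaded images, one per second at 60 fps
  int idx = (frameCount / 60) % images.length;
  if (images[idx] != null) image(images[idx], 0, 0);
}
```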
My next step is to figure out how to integrate these images into my virtual scene procedurally.
November 10, 2017
This week I continued with the twitter4j integration I couldn’t finish last week. I used a face-detection library called OpenCV to detect audience members’ faces. If faces are detected, I do a timed capture as usual. If not, I display one of the “art gallery” search images that the twitter4j library returned instead.
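The branching logic could be sketched roughly like this in Processing, using the OpenCV for Processing library (gab.opencv) and the video library for camera capture. The details here are a sketch of my own, not the exact code, and searchImages is assumed to be filled elsewhere by the image-download step:

```java
// Sketch of the face-detection branch (assumed details): if at least
// one face is detected in the live feed, show the camera capture;
// otherwise fall back to one of the downloaded search images.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
PImage[] searchImages;  // assumed to be populated by the download step

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  opencv.loadImage(cam);
  Rectangle[] faces = opencv.detect();
  if (faces.length > 0) {
    image(cam, 0, 0);  // faces present: show the live capture
  } else if (searchImages != null && searchImages.length > 0) {
    // no faces: show one of the "art gallery" search images
    image(searchImages[frameCount % searchImages.length], 0, 0);
  }
}
```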
One issue I noticed with this approach is that the dimensions of online search images aren’t standardized, unlike my live camera capture, which is fixed at a specific resolution. So when certain internet images are displayed, parts of the image become really pixelated. I actually really like the pixelated effect, but I want to take time to decide whether there are deliberate glitch effects I want to introduce, or whether I should fix the issue and display a clear Google image instead. Please see below for some test results with the current effect:
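If I do go the “fix it” route, the pixelation comes from upscaling small web images to fill the canvas. One possible fix (a hypothetical helper, not from my current sketch) is to compute a uniform scale factor that fits the image inside the canvas while preserving aspect ratio, capped at 1.0 so small images are letterboxed instead of enlarged:

```java
// Hypothetical fit-to-canvas helper: returns the uniform scale that
// fits an image of (w, h) inside a canvas of (cw, ch) while keeping
// the aspect ratio. The cap at 1.0 means small images are never
// upscaled, which is what causes the pixelation described above.
public class FitScale {
    static double fitScale(int w, int h, int cw, int ch) {
        double s = Math.min((double) cw / w, (double) ch / h);
        return Math.min(s, 1.0); // never enlarge; letterbox instead
    }

    public static void main(String[] args) {
        // A small 320x240 web image on a 1280x720 canvas would need
        // 3x upscaling, so the scale is capped at 1.0.
        System.out.println(fitScale(320, 240, 1280, 720));   // 1.0
        // A large 4000x3000 photo shrinks to fit the canvas.
        System.out.println(fitScale(4000, 3000, 1280, 720)); // 0.24
    }
}
```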