Feb 11th, 2019
I would like to explore the idea of uselessness: its definition, and its implications for what can be defined as an object, or as human. I think becoming aware of the uselessness in our presence can serve as an interface into a new, non-material space, one that is an important part of who we are, how we view the world, and our relationships with each other and with ourselves.
I have curated a list of reading materials for myself, centered on philosophical arguments about capitalism, materiality, posthumanism, and nonhumanism. I am in the middle of a long article called The Cybernetic Hypothesis.
This article was written by the French philosophical collective Tiqqun in 2001, with the aim to “recreate the conditions for another community”. Cybernetics is a transdisciplinary approach to exploring regulatory systems—their structures, constraints, and possibilities. Norbert Wiener defined cybernetics in 1948 as “the scientific study of control and communication in the animal and the machine.” The Cybernetic Hypothesis questions the design of communication systems and the conditions of control.
I would like to take on a few learning adventures in web and game development. The aim is to facilitate a few useless communications. I am interested in creating a web-based creature that interacts with others through a user interface. But it is also important that the creature takes on its own journey in a virtual world: I don’t want its appearance and behaviors to be dictated entirely by user interaction. The creature can communicate with users, but it also journeys toward an alternative space that belongs to itself.
I would like to explore the following four platforms/techniques/approaches:
- riot.js, a front-end web-development framework
- Firebase, a back-end database service
- tensorflow.js, a machine learning library for the web
- Unity animation and procedural scripting
February 18th, 2019
This week, I made progress on my first Useless Communication. I made a simple creature called Gigi. It can get hungry or listen to a joke.
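Gigi's two behaviors can be sketched roughly like this. This is a minimal hypothetical sketch, not the actual project code: the function names, the hunger threshold, and the reply strings are all my assumptions.

```javascript
// Hedged sketch of a Gigi-like creature: it grows hungry over time and can
// react to a joke. All names and numbers here are illustrative assumptions.
function makeCreature(name) {
  return {
    name,
    hunger: 0,
    tick() { this.hunger += 1; },              // advance one time step
    isHungry() { return this.hunger >= 3; },   // hungry after three steps
    feed() { this.hunger = 0; return `${name} munches happily`; },
    hearJoke(joke) { return `${name} giggles at "${joke}"`; },
  };
}

const gigi = makeCreature('Gigi');
gigi.tick(); gigi.tick(); gigi.tick();
console.log(gigi.isHungry());            // true
console.log(gigi.feed());                // Gigi munches happily
console.log(gigi.hearJoke('knock knock'));
```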
March 3rd, 2019
This week, I made two creatures that can communicate with each other. When the mouse cursor hovers over either creature, it speaks. The other creature responds by repeating what the first creature said, adding some arbitrary comments afterwards.
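The echo-and-comment exchange can be sketched like this. The comment list and function names are hypothetical stand-ins for the project's own code.

```javascript
// Sketch of the two-creature exchange (comment list and names are
// hypothetical): one creature speaks on hover, and the other replies by
// repeating the utterance and appending an arbitrary comment.
const COMMENTS = ['How curious.', 'I was just thinking that.', 'Hmm, really?'];

function speak(name, line) {
  return `${name}: ${line}`;
}

function respond(name, heard, pick = 0) {
  // Echo what was heard, then tack on one of the canned comments.
  return `${name}: ${heard} ${COMMENTS[pick % COMMENTS.length]}`;
}

console.log(speak('Gigi', 'I am hungry'));       // Gigi: I am hungry
console.log(respond('Mimi', 'I am hungry', 2));  // Mimi: I am hungry Hmm, really?
```

In the browser, speak() would fire on the hovered creature and respond() on the other one a moment later.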
I also started watching online tutorials on tensorflow.js and ml5.js, to begin experimenting with AI-catalyzed sentient behaviors. And I started to learn Firebase, a Google back-end database service that lets me store data in non-relational databases in JSON format.
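Storing a creature's utterance in Firebase could look roughly like this. The 'utterances' path and the record fields are my assumptions, not the project's actual schema.

```javascript
// Hedged sketch of writing an utterance into Firebase's realtime database.
// The path and record shape below are illustrative assumptions.
function makeUtteranceRecord(creature, text, timestamp) {
  // Firebase stores a JSON tree, so records are plain serializable objects.
  return { creature, text, timestamp };
}

// In the browser, with the 2019-era Firebase Web SDK loaded and
// firebase.initializeApp(config) already called:
//   firebase.database()
//     .ref('utterances')
//     .push(makeUtteranceRecord('Gigi', 'hello', Date.now()));
```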
March 10th, 2019
This week I focused on setting up natural language processing with this resource: https://ml5js.org/docs/Word2vec
I set up a quick test that uses the seed sentence “my name is Gigi” to generate a paragraph of AI speech. I first used an existing model trained by ml5 on Virginia Woolf’s novels. Below is the training result I got.
I then followed a training tutorial (https://ml5js.org/docs/training-lstm) to train my own models. After some initial difficulties, I managed to produce a model based on H. P. Lovecraft’s story The Thing on the Doorstep. Below is my result based on this model.
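The seed-sentence generation step can be sketched with ml5's charRNN (its LSTM text generator). The model path and the length/temperature values below are assumptions for illustration.

```javascript
// Rough sketch of the ml5 charRNN generation step. Model path, length, and
// temperature are illustrative assumptions.
function generationOptions(seed, length = 200, temperature = 0.5) {
  // charRNN.generate() takes a seed string plus length/temperature settings.
  return { seed, length, temperature };
}

// In the browser, with ml5.js loaded:
//   const rnn = ml5.charRNN('./models/lovecraft/', () => {
//     rnn.generate(generationOptions('my name is Gigi'), (err, result) => {
//       if (!err) console.log(result.sample);  // the generated paragraph
//     });
//   });
```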
For next week, my goal is to integrate voice recognition into the interaction design, and possibly set up a back-end database to store and dynamically retrieve data.
April 1st, 2019
Today, I learned how to use the speech recognition library p5.speech.js.
I made a continuous speech recognition app, which constantly listens to real-time speech through my laptop’s microphone.
I also enabled partial, interim recognition so that results show up on the website faster, with less accuracy.
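The continuous listener can be sketched like this. p5.SpeechRec and its continuous/interimResults flags are p5.speech's API; the bracket display helper is a hypothetical addition of mine.

```javascript
// Sketch of the continuous, interim-result listener. The formatting helper
// below is hypothetical; the commented wiring uses p5.speech's API.
function formatResult(text, isFinal) {
  // Show interim guesses in brackets so the faster, less accurate partial
  // results are visually distinct from final ones.
  return isFinal ? text : `[${text}]`;
}

// In the browser, with p5.js and p5.speech loaded:
//   const rec = new p5.SpeechRec('en-US');
//   rec.continuous = true;       // keep listening between phrases
//   rec.interimResults = true;   // surface partial guesses early
//   rec.onResult = () => {
//     // rec.resultString holds the latest (possibly interim) transcript;
//     // deciding whether it is final is left to the callback.
//     document.body.textContent = formatResult(rec.resultString, false);
//   };
//   rec.start();
```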
I published my result here:
I recorded a quick test I did just now:
For the next two weeks, I will focus on merging all three examples (the mimi pet, the ml5 machine-learning test, and the speech recognition app) to create a sentient virtual character that can somewhat carry on a “useless” conversation.
April 22nd, 2019
This week, I worked on integrating the LSTM test and the speech recognition test. First, I wanted to trigger the LSTM to generate a few sentences from the preprogrammed seed sentence “Please tell me about yourself” at the press of a button. Here are some images from this test.
Then, I wanted the seed sentence to be set dynamically from speech recognition, and the reply to be triggered automatically right afterwards. Please see this screen capture for where I am now:
I think the form is mostly what I want, except that the training text is not written from a first-person perspective. Since the models are trained on novels, the output doesn’t feel like spontaneous conversation. I had not thought about this until today, so that is what I will work on next.
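The speech-to-seed wiring described above can be sketched like this. The names, the fallback handling, and the generation options are assumptions; only the overall pipeline (recognized phrase becomes the seed, generation fires automatically) comes from the text.

```javascript
// Sketch of wiring recognition output into the LSTM seed. The recognized
// phrase becomes the seed; if nothing usable was heard, we fall back to the
// preprogrammed sentence. Fallback behavior is my assumption.
function makeSeed(recognized, fallback = 'Please tell me about yourself') {
  const text = (recognized || '').trim();
  return text.length > 0 ? text : fallback;
}

// Browser wiring, with p5.speech and ml5.js loaded:
//   rec.onResult = () => {
//     rnn.generate(
//       { seed: makeSeed(rec.resultString), length: 200, temperature: 0.5 },
//       (err, result) => { if (!err) creatureSay(result.sample); });
//   };
```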