So, we're here again, and we're going to
go to the DragonBoard and the terminal and walk
through the two code files that we used to create our Emotion Booth project.
So, we are here in the project directory again, and here are the
same four files as before,
and we'll just open the main file, emotion_booth.
So, the first couple of lines are just imports.
boto3 is the Amazon Web Services SDK.
And then, cv2 is OpenCV, numpy is NumPy,
which handles matrix-like manipulations, for example on the images,
and json is for the JSON response, because that's the format it comes back in.
And then, import time and import serial.
serial is necessary for communicating with the Arduino,
because that's the channel, or the method of communication, we chose.
So, there's just a simple print statement to say we've started,
and here are the BAUD_RATE and the PORT.
The BAUD_RATE is just whatever speed
the Arduino has been listening at, and the PORT is
how we know where it's connected to the DragonBoard.
So, with the mezzanine,
it's connected over /dev/tty96B0.
And then, we just initialize the Arduino serial object on that port.
Here's just a simple class.
We call it Arm, because we think of the servo as an arm,
but in this case we just use it for one servo, and we initialize that object.
Basically, it has an update method that takes in an angle
and sends a message to the Arduino over serial,
to tell the Arduino which servo to move and where to move it.
In this case, it's just the one servo.
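A minimal sketch of what such a class might look like (the name `Arm`, the message format, and the starting angle are assumptions, since the real file may differ). The serial connection is passed in, so anything with a `write()` method works, including a real `serial.Serial` or a fake for testing:

```python
class Arm:
    """One servo treated as an 'arm', driven over a serial link."""

    def __init__(self, connection):
        # `connection` is anything with a write() method, e.g. a
        # serial.Serial instance opened on the Arduino's port.
        self.connection = connection
        self.angle = 90  # assumed starting position: upright

    def update(self, angle):
        # Send the target angle to the Arduino as a plain string.
        # The newline marks the end of the number for the sketch.
        self.angle = angle
        self.connection.write(f"{angle}\n".encode())
```

On the DragonBoard you would construct it with something like `Arm(serial.Serial(PORT, BAUD_RATE))`.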
The next part is mostly initializations.
We initialize the connection to S3, which, again,
if you don't know, is basically
a simple hard drive on Amazon that you can use to upload and store files.
And here is the connection to Rekognition,
which is the image,
machine learning, computer vision service that they have.
And here we initialize our webcam and just set it up.
Let it feed for a bit so that it can initialize a good picture.
And here, we actually want to capture the image from the webcam.
In this case, we capture it twice because, for some reason,
our webcam had a problem where it always gave back the frame it captured before.
So, we read the old frame and then we read the new frame.
The new frame is the one we want; it's actually the current frame.
Your case may differ; for example,
I ran this on my computer and it was fine.
It captured the current frame using just one line.
So, depending on whether your webcam gives back the old frame or the new frame,
you can comment out one of these lines and it should work the same.
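That double-read workaround can be sketched as a small helper (a sketch under the assumption that `cap` behaves like OpenCV's `cv2.VideoCapture`, whose `read()` returns an `(ok, frame)` pair):

```python
def capture_fresh_frame(cap):
    """Read twice so buffered webcams return the current frame.

    Some webcams hand back the previously captured frame on the
    first read(); the second read() then gives the fresh one.
    """
    cap.read()              # discard the possibly stale frame
    ok, frame = cap.read()  # this one should be current
    if not ok:
        raise RuntimeError("could not read a frame from the webcam")
    return frame
```

If your webcam behaves, you can drop the first `cap.read()` and keep only the second.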
Here, we release the camera because we don't need it
anymore, and in this part we choose what bucket to use.
So, replace this bucket name with whatever bucket name you chose to create.
This is the image name that we're saving it locally,
and this is the image path that we're going to save it on Amazon S3.
So in this case, we save it in the emotion_booth directory and then,
save it as face.jpg.
And here, we actually save the image locally in the same directory.
And then next, we open that same image using open(), as data,
and then we use the S3 client that we initialized before and upload it to our bucket.
So, here's the image file,
here's the bucket we chose, and here's what we're going to save it as.
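That save-and-upload step might look roughly like this (the bucket and key names are placeholders; `upload_fileobj` is a standard boto3 S3 client method, though the original file may use a different one):

```python
def upload_image(s3_client, local_name, bucket, key):
    """Upload a locally saved image file to an S3 bucket."""
    with open(local_name, "rb") as data:
        # Stream the file's bytes up to s3://<bucket>/<key>.
        s3_client.upload_fileobj(data, bucket, key)
```

So for this project the call would be something like `upload_image(client, "face.jpg", "your-bucket-name", "emotion_booth/face.jpg")`.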
And then, the next part is actually using Amazon Rekognition.
So, here we use client.detect_faces.
That's one of its methods.
It's just detect_faces, and we have to specify
an image, and the way we do that is by specifying an S3Object.
This is going to be the same image that we just uploaded,
so it says what bucket it's in and how to get to that image.
And then here, the attributes.
In this case, we want all the attributes;
you can choose to get fewer,
depending on what the specific options for this method are.
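The call itself can be sketched like this (the `Image`, `S3Object`, and `Attributes` shapes follow Rekognition's `detect_faces` API; the client is passed in so a stub can stand in for the real boto3 client):

```python
def detect_emotions(rekognition, bucket, key):
    """Ask Rekognition for face details on an image stored in S3."""
    return rekognition.detect_faces(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        Attributes=["ALL"],  # request every attribute, emotions included
    )
```

Passing `Attributes=["DEFAULT"]` instead would return a smaller subset without the emotions.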
So, one thing to know about the response is that
it will return details for every face in the image.
For example, if Andrew and I are both in the camera,
it would detect two faces.
So, it gives a list, or an array, of face details.
Inside the face details is a bunch of JSON
that tells you information about that face.
For example, it can tell if you have a beard or if you are wearing glasses,
and even emotions as you can see here.
So, you want to access the face details
and access the first entry, because there's only going to be
one face when we run it, and then we want to access the Emotions information that we get.
Now, inside Emotions there is also
another array that contains different emotions and confidence levels,
but the first entry is always the emotion with the highest probability, or confidence.
So, what it thinks is most likely.
So, we just get the top emotion using the first entry, and
once we have that emotion, we keep it.
But then, we also want the type,
the type of emotion, to see what it is.
So in this case, if it was happy,
it would just give us the happy string, and we want to see how confident it is.
In this case, we don't use the confidence for anything,
but you could print it out and give feedback on
how much it thinks you're happy or sad.
And if it's not confident enough,
you could just give an error or retake the image.
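Pulling the top emotion out of the response can be sketched like this (the `FaceDetails`, `Emotions`, `Type`, and `Confidence` keys are the actual shape of a Rekognition `detect_faces` response; the confidence threshold is a made-up example of the "retake the image" idea, not something the original file does):

```python
def top_emotion(response, min_confidence=50.0):
    """Return (type, confidence) for the most likely emotion.

    Rekognition sorts the Emotions array by confidence, so the
    first entry is the emotion it believes most strongly.
    """
    emotions = response["FaceDetails"][0]["Emotions"]
    best = emotions[0]
    if best["Confidence"] < min_confidence:
        # Hypothetical fallback: not confident enough, try again.
        raise ValueError("not confident enough, retake the picture")
    return best["Type"], best["Confidence"]
```

The type comes back in upper case, for example "HAPPY" or "SAD".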
Here's actually where we tell the Arduino which direction to move.
So if it's happy,
we want to print that it's happy and then we want to update
the Arduino servo and set it to 45 degrees.
So 45 degrees is right in this setup, and if it's sad,
we make it 135 degrees, which is left in this setup.
And then, 90 degrees is right in the middle between those, so the arm will be upright.
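That decision boils down to a tiny mapping, sketched here as a function (an assumption about structure; the original just uses if/else branches inline, and the emotion types follow Rekognition's upper-case convention):

```python
def angle_for_emotion(emotion):
    """Map an emotion to a servo angle: right for happy,
    left for sad, upright for anything else."""
    if emotion == "HAPPY":
        return 45   # point right
    if emotion == "SAD":
        return 135  # point left
    return 90       # stay upright in the middle
```

The result is what gets passed to the Arm's update method from before.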
And the last part is just for niceness:
we want to show the image to ourselves to see what it looks like.
And finally, this cv2.waitKey
lets us see the image until we want to close it by pressing any key.
And the last part just closes all the windows once that's done. So, yeah.
That was basically how all of this works.
It's a pretty simple file.
You can take it and change it to whatever you want to use it for.
That's most of it. Real quick,
we're going to jump into the terminal again and then
go over the Arduino file.
It's going to be pretty short, and Andrew will take care of that.
So, let's exit out of this Vim session with the Python file,
and then let's go to Servo_control.
This file is a lot easier than the one before.
But basically, we want to initialize the pin over here.
As I said in the previous video,
the servo is connected on D3, so the pin number is 3.
And then, we initialize the servo as servo_0.
So in the beginning, in setup,
we want to initialize the servo so that it listens on pin 3, and you also want to
initialize serial so that
the Python file knows where it needs to send to, and the Arduino, in turn,
listens on that same exact port.
And here's the void loop part,
and this is the part that actually reads from serial what string was sent.
So, the first part is listening for it.
First, you want to check if serial data is available. If it isn't,
you don't break out of the loop;
it will just keep looping continuously without doing anything.
But if data is there,
it will read from serial and then work out the angle from the string.
So, it converts the string to an integer, and then
it writes the servo to that angle, and that's pretty much it.
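The loop's logic, sketched in Python rather than Arduino C for consistency with the rest of this walkthrough (on the Arduino side this corresponds to checking `Serial.available()`, reading the string, converting it to an integer, and calling `servo_0.write(angle)`):

```python
def handle_serial_line(line, servo_write):
    """Mirror of the Arduino loop body: if a line arrived over
    serial, convert it to an integer angle and move the servo."""
    if not line:               # nothing available: do nothing this pass
        return None
    angle = int(line.strip())  # string-to-integer conversion
    servo_write(angle)         # equivalent of servo_0.write(angle)
    return angle
```

On the Arduino this runs forever inside loop(), once per pass, so empty reads just fall through until the Python side sends an angle.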
Yeah. So, those were all the files
for this simple project.
You could integrate other sensors from the Sensors Mezzanine
and make this a lot more complicated, to fit your needs
or make it more interesting for you.
In this case, we just made a simple thing that moves a servo
and interacts with the real world a little bit. So, yeah.
Be sure to build this project. Download the code,
play around with it, and see what you can build with it. See you in the next videos.