Tuesday, December 24, 2013

Final Mocap Progress

For my final piece of motion capture I planned out a scene recreating a shop window with a collection of mannequins. One comes alive, confused about what has happened and why it can't get out. I captured this performance using a two-camera setup as there was more movement than in my previous tests. The data I obtained was very messy and required a lot of cleanup, to the point where it probably would have been just as quick, or even quicker, to keyframe the animation myself. What I have managed to clean up is shown below. That is the stage I've got to at the minute: I've had to adapt a lot of the motion as it was in the wrong position and over-rotating in some joints. The movements are quite jittery in places and very snappy; I've tried adjusting this and it has reduced the problem, but it's not to the standard I would have liked. I want to put some more personality into the movements and add that appeal. I'll try this using animation layers so that it's not destructive to the base motion and I can always delete it if it's not working out.
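To keep that pass non-destructive I've sketched out roughly how it would look using Maya's Python commands. This is only a sketch: the joint names are placeholders for whatever the retargeted skeleton ends up being called, and the Euler filter is just the standard fix I know of for the flipping/over-rotating joints.

```python
import maya.cmds as cmds

# Hypothetical names for the retargeted skeleton's joints.
joints = ['mocap:Spine', 'mocap:LeftArm', 'mocap:LeftUpLeg']

# Euler-filter the rotation curves to fix joints that flip or over-rotate.
rot_curves = cmds.keyframe(joints, attribute='rotate', query=True, name=True) or []
if rot_curves:
    cmds.filterCurve(rot_curves)  # the default filter is the Euler filter

# Put the personality/appeal pass on its own animation layer so it can be
# muted or deleted without touching the baked mocap underneath.
layer = cmds.animLayer('personalityLayer')
cmds.select(joints, replace=True)
cmds.animLayer(layer, edit=True, addSelectedObjects=True)
```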

I've been using markerless motion capture, a fairly recent technology, and it doesn't seem to be as accurate as marker-based motion capture such as Vicon. This is one reason it has taken me a lot longer to get it to a workable stage. I feel I could animate it better and with more ease from scratch, and right now I'm not comfortable with what I'm achieving. I don't know if I will be able to get the motions smoothed out the way I would with keyframe animation, and it's frustrating that I can't figure it out.

Scared Animation / MoCap

Another of my tests with motion capture was to create a reaction to something; in this instance I chose to capture a scared/shocked emotion. I did this with a one-camera setup as there wasn't any movement that would go behind the body and be missed by the camera. I tried two different ways of retargeting this time: one within Mocap Studios and my original way within Maya. The two videos below show the mocap retargeted within Mocap Studios, followed by the version done in Maya. What was interesting to see was that when retargeted using the iPi software, the original data came out quite well and accurately, more so than doing it in Maya. The only issue with this process is that only FBX models can be imported, and I was not sure how to export one of my own rigged models as an FBX and bring it into Mocap Studios. In Mocap Studios you have to connect each joint from the data skeleton to the imported character's joints, and I wasn't able to do this with my own model. In order to get more practice with cleaning up the data and using my own model, I think I'm going to stick with retargeting in Maya.
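If I do come back to the Mocap Studios route, my understanding is that exporting a rigged model to FBX from Maya would look roughly like the sketch below, assuming the fbxmaya plug-in is available. The file path and object names are just placeholders; I haven't actually tested this against Mocap Studios yet.

```python
import maya.cmds as cmds

# Make sure Maya's FBX plug-in is loaded before using the "FBX export" file type.
cmds.loadPlugin('fbxmaya', quiet=True)

# Select the skeleton root and the bound mesh; these are placeholder names
# standing in for my own rig.
cmds.select(['Character_Root', 'Character_Geo'], replace=True)

# Export only the selection as an FBX file (placeholder path).
cmds.file('C:/mocap/myCharacter.fbx', force=True, exportSelected=True,
          type='FBX export', preserveReferences=True)
```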





As with my previous mocap test, I also created a keyframed version. This time I keyframed using my own reference, and from this I also created a more exaggerated animation on top of the base animation using animation layers. Because I've used the Stewart rig from Animation Mentor, there is immediately more appeal in terms of design. In the exaggerated animation there is more of a reaction, and it comes across more vividly than the base animation or the mocap version. To make it more appealing I would need to concentrate more on the line of action of the body and the arcs being created; arcs are a natural part of motion and add appeal to movement. It's easier to create these with keyframing as you are starting from scratch, so I'll try to implement these techniques into my motion capture and see how much I can edit the motion while keeping the main performance at its core.
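The layer setup itself is quick to script. Below is a rough sketch of how the additive exaggeration layer could be set up in Maya Python; the Stewart control names and the frame number are made up for the example.

```python
import maya.cmds as cmds

# Hypothetical Stewart control names; the real ones depend on the rig.
controls = ['Stewart:head_CTRL', 'Stewart:chest_CTRL', 'Stewart:l_arm_CTRL']

# Create a layer for the exaggeration pass and add the controls to it.
exag_layer = cmds.animLayer('exaggerationLayer')
cmds.select(controls, replace=True)
cmds.animLayer(exag_layer, edit=True, addSelectedObjects=True)

# Keys set with the animLayer flag land on the layer, offsetting the base
# animation rather than overwriting it.
cmds.currentTime(12)
cmds.setKeyframe(controls, attribute='rotate', animLayer=exag_layer)

# Muting the layer makes it easy to flick between the base and exaggerated takes.
cmds.animLayer(exag_layer, edit=True, mute=True)
```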



Tuesday, December 17, 2013

Two Camera Setup - iPiSoft

For the past few tests I have just used a single Kinect camera. This is fine for recording performances from the front that don't really have any actions that disappear from the camera's view. I decided to try out a two-Kinect setup, with the cameras placed at roughly 60 degrees to each other. Because there are two cameras, a calibration process needed to take place to determine where both cameras sit in 3D space. I held up a piece of cardboard and moved it from one camera to the other whilst standing in the same place. After this I was free to record my performance and take the data into Mocap Studios. The calibration video needed to be opened in Mocap Studios first so that it could calibrate based on the 3D plane; by saving this file it could then be used as a reference when opening the main motion capture data file.


Below are two videos showing the raw data and the cleanup. I found that with two cameras the feet were more stable, but the movement of the left arm wasn't captured very well. This could have been down to the calibration setup, and I will need to check this before I go on to creating my final animation. Even the raw data is quite accurate, more so than a single-camera setup would give me. For the cleanup I had to put in the movements for the left arm myself as they were not captured; I also corrected the feet and spine. The rigged model I've been using is my own, which I managed to fix after my previous attempt. It still needs some more adjusting, but it's given me a decent result this time round.
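My cleanup routine after retargeting is roughly the sketch below: bake the motion down onto the character's joints so the curves can be edited directly in the graph editor. The joint naming convention and the frame range here are placeholders for my own scene.

```python
import maya.cmds as cmds

# Placeholder naming for the character joints driven by the retargeted data.
joints = cmds.ls('char:*_JNT', type='joint')

# Bake the retargeted motion down to keys on the skeleton so the connection to
# the mocap data can be broken and the curves edited by hand.
cmds.bakeResults(joints, time=(1, 240), simulation=True,
                 sampleBy=1, preserveOutsideKeys=True)

# Remove the flat, unused curves left behind by the bake to keep the scene tidy.
cmds.delete(joints, staticChannels=True)
```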



Monday, December 16, 2013

Clumsy Ninja

I recently downloaded a game for iOS called Clumsy Ninja. I was initially intrigued by the style and how the animations would look. After playing it for a little bit I was really impressed with the overall aesthetic and the animations, so I decided to look into how the animation was created and found out that it is actually AI-based.
"Normally, game designers have to create painstaking animations that model every possible kind of behavior. They predefine what a game character will do based on certain inputs. But with Clumsy Ninja, NaturalMotion’s designers don’t do that. Rather, they create the character from the bones up. They do that just once. They imbue the body with physics, based on the Euphoria engine. So the arms will move like arms and limbs will behave in a realistic fashion. They marry that to artificial intelligence, which tells the character what to do in a given situation. Then they essentially let the character loose in a game world and see what happens. 
With canned animations, all you can do is play back an animation in response to something the gamer does. Clumsy Ninja can generate procedural, or on-the-fly, animations, based on actions taken by the A.I."
The standard of the animation is something you could see in short animations or even feature-length films. There are some places where you can tell it's based on physics simulation, but for the majority of it I was very impressed. It's interesting that this high standard can be achieved with AI and physics, with each animation calculated in real time. It will be great to see how far this can go in the near future and what it can expand into beyond just a mobile game. As great as it is to see this technology, the animator inside me is a bit sad that it wasn't keyframed. With the type of interaction the game entails, I can understand why it made sense to go for physics and AI, but it would have been great to see even small sections or movements that were keyframed.