Week 4: Rigging and Animating

Walking and running in Unreal

Following the tutorial, I made a third-person character that can run and kick when the 1 key is pressed. I also found out that it can run and kick at the same time:

Then I gave the character the superpower of doing all these things while looking at their phone:

I first chose this run simply because I thought it was funny. I chose the kick because I don’t get to do this very often in real life. I liked this combination because the run looks really out of control and then the character pulls it together and delivers a crazy kick.

I merged the run with the touch screen animation on the upper body so that I could create a game simulating everyone walking around on their phones, but it’s better than real life because now you can run without looking up from your phone! I also liked that in this game engine context the phone replaces the standard gun.
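The usual way to do this kind of merge in Unreal is a "Layered Blend Per Bone" node in the Animation Blueprint, blending the phone animation onto the spine and arms while the legs keep the run cycle. My project was wired together in the editor rather than in code, but here is a rough, hypothetical C++ sketch of the surrounding character setup - the class, asset, and input-mapping names are all made up for illustration:

```cpp
// Hypothetical sketch - the actual project was set up with Blueprints.
#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "Components/InputComponent.h"
#include "Animation/AnimMontage.h"
#include "PhoneRunnerCharacter.generated.h"

UCLASS()
class APhoneRunnerCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    // Full-body kick animation (e.g. the Mixamo kick), played on the 1 key.
    UPROPERTY(EditDefaultsOnly, Category = "Animation")
    UAnimMontage* KickMontage = nullptr;

    // Looping phone animation assigned to an "UpperBody" montage slot, so the
    // Layered Blend Per Bone node (set at the spine) only overrides the arms
    // and torso while the legs keep the normal run.
    UPROPERTY(EditDefaultsOnly, Category = "Animation")
    UAnimMontage* PhoneMontage = nullptr;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // The character never looks up, so start the phone animation immediately.
        if (PhoneMontage)
        {
            PlayAnimMontage(PhoneMontage);
        }
    }

    virtual void SetupPlayerInputComponent(UInputComponent* PlayerInputComponent) override
    {
        Super::SetupPlayerInputComponent(PlayerInputComponent);
        // "Kick" is an action mapping bound to the 1 key in the project's input settings.
        PlayerInputComponent->BindAction("Kick", IE_Pressed, this, &APhoneRunnerCharacter::Kick);
    }

    void Kick()
    {
        if (KickMontage)
        {
            PlayAnimMontage(KickMontage);
        }
    }
};
```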


I started to make a garden (from 3D scans I took of a garden by my apartment) that the character could run through and not look at, but I ran out of time:

[Screenshot: WalkRun_Tutorial in the Unreal Editor]

This is my favorite shot though:

Discussion Questions

  1. Hacking The Sims so that they die feels very different from what GTA modders are doing (and is discussed differently in the article) - why is that? I remember my friend (in elementary school, probably?) showing me how to take away the swimming pool ladder, for example, and reflecting on it now, I’m trying to figure out how that fits into these other trends and behaviors.

  2. Reading about Wolfson’s piece and then about the GTA modders, I was wondering - if the GTA modders want more and more realistic violence, where and when do the two intersect?

Week 3: Resizing, Re-Meshing, Rigging

This week we took our 3D scans and resized them, fixed the meshes and rigged them. I decided to do this with the scan that Jiwon did of me, the one in which I’m wearing a dress. Even though I thought the dress might be a problem, I hoped it was short enough to mostly work since the arms on my other scan looked really bad.

Following the tutorials, I first brought the OBJ file for my scan into Maya and resized it:

Next, in Wrap3, I used the base model to re-mesh my avatar. I was curious how the dress would look after this process.

This was straightforward until I ended up with a growth on my mouth! Anna explained to me that this is the mouth bag, which popped out of the mesh. I didn’t have time to start over at this point, so I cleaned it up in Maya. It’s not perfect (if I were to use this for a project, I think I would re-do it), but it looks a lot better:

[Screenshot: the re-meshed scan in Wrap 3.4.8]
[Screenshot: cleaning up the mouth bag in Maya 2019]

Then I uploaded it to Mixamo:

The dress looked a little weird in between the legs in Wrap3, but it doesn’t look too bad in the animations. Again, I think I’d use a different scan if I were doing a project with this. I also had a lot of fun playing around with the animations:

[Screenshot: the rigged scan in Mixamo, plus animated GIFs of a few of the animations]

There is something especially satisfying about the falling animations:

[Animated GIF: one of the falling animations]

Reading Discussion Questions

1. The development of the personal bubble and power gesture is a good step towards addressing harassment in VR, but it still puts the responsibility on the victim and requires constant alertness and vigilance from them to trigger the “power gesture.”

“For example, what if a player had tools on hand to change the outcome of the encounter before it ended in a negative way?  How different would our childhood memories of the schoolyard bully be if our bodies had been immovable when shoved, or we could mute their words at the push of a button?  Would the author’s experience have been any different if she could have reached out with a finger, and with a little flick, sent that player flying off the screen like an ant?”

I wonder - is there a way to put this responsibility and the consequences on perpetrators instead?

2. The article about The Sims was refreshing in that it is one of the only positive articles I’ve read about digital spaces recently. It made me wonder - is an environment like The Sims the solution? Still, who decides the language that is used in the parameters (I’m assuming most of the programmers + designers aren’t queer)? How does signaling work in this new world? It also made me wonder what a digital world designed by the creators and users of the _personals_ Instagram account (now Lex.app) would look like.






Week 2: Ethics & Ownership

Structure Sensor


Jiwon and I tried to scan each other but didn’t have much success. Even when the scan looked OK as we were doing it, the result had holes or discolored patches on it. Jiwon and I had both done previous scans, so after five attempts or so we decided to stick with those. But then when we uploaded them to Sketchfab, they actually looked OK!


It’s hard to tell what worked well and what didn’t. I found it hard that you can’t tell how well the model has turned out from the iPad itself. Sometimes it seemed to do better when we moved faster. It was most frustrating when the model would come unaligned from the person’s body.


The first time I tried the Structure Sensor, Shuju and I tried out different poses with objects. We found a plastic crab on the floor that I perched on my shoulder. This is actually my favorite scan, even though it can’t be rigged. I don’t have a pet crab or any special connection to them, but I like the dress I’m wearing. I felt a little self-conscious about the result, though, since you can see the varicose vein in my leg - it made me wish the scan wasn’t quite so accurate.

I also like the second one I did because I like the sweater I’m wearing. The sweater bends in weird ways when it’s rigged, which I actually like. The hands look bad, though, because the sensor had trouble capturing them in a T-pose, but I kind of like that it looks like the model is breaking apart or dissolving back into the digital ether.



Reading Discussion Questions

The readings prompted a few questions for me:

  1. Most of the readings were about ownership of entire bodies, but they made me start to think about ownership of parts of bodies. What if people start selling the right to use their body parts in digital environments? This is already happening with asset stores to a certain extent, but what if some people have such desirable virtual body assets that they start being sold for higher and higher prices? Since this is regulated for our physical bodies, should it be regulated for our virtual ones? How many parts make up a “whole” or “likeness”?

  2. I feel very resistant to the use of virtual versions of deceased actors in movies. I feel this way for several reasons, but one question I had was around the motivation for doing this. The reason mostly seems to be nostalgia, but what is nostalgia like for future audiences who are seeing a majority of the actors in their movies played by virtual models of people who aren’t alive anymore? Who or what do they get to be nostalgic for?






Week 1: Self Portrait

ZEPETO

The first platform I chose to make an avatar with is called ZEPETO, a South Korean app that allows users to make an avatar, build a room and take pictures with friends.


One interesting feature is that it “autocreates” your avatar using the front camera. I started with this and then edited it heavily from there. I was surprised by the extent to which I had control over almost every part of the face. The app lets you reposition and resize the overall face shape, nose, eyes, lips, and eyebrows (which is a bit hard to do on a small screen). You can choose these parts to a certain extent - for some fancier/more desirable/more detailed features you have to pay using coins. I decided to buy a leaf for my hair and some cool purple glasses. You cannot change the age of the avatar, so everyone looks like a cartoon version of a teenager. You also cannot choose your body type or customize different parts of your body, but you can select or “buy” different clothes, which have different body shapes associated with them.

This is an accurate representation of me to the extent that this avatar also has brown wavy hair and freckles and the same skin color. It’s a little off-putting that it looks so young - in retrospect, I wonder if I should have just re-made myself as a teenager. Even though I could edit the facial features, I don’t think they look like mine. It’s hard not to like it because it’s so cute (and I’m kind of obsessed with the background options), but I don’t feel much connection to it.

Oculus

At first I tried Adobe Fuse, but I became so disturbed and daunted by it that I switched to Oculus. I had been curious about the Oculus avatars ever since I saw the launch video for Facebook Horizon - in particular, I wondered what it would be like to have an avatar without legs.

In Oculus the user can pick a face, hair, eyewear, eye color, and eyebrow color. For everything the color can be adjusted, but the shape and positioning cannot. After ZEPETO this felt like a very limited set of options. At first I thought that I had somehow selected a male category for the avatar, since most of the faces looked so masculine to me. The face options also showed a range of ages and I almost made an older version of myself.

Since you cannot manipulate the facial features, it seems that most of the personality comes through in the clothing - there were by far the most options in this category.

Even though I gave this avatar pink hair, purple lipstick, and crazy glasses, I felt that this one was a better representation of me than the ZEPETO avatar. What bothered me about this avatar was that you cannot add breasts. Since the whole avatar is just a head and a chest, this was very noticeable, and it was actually quite disturbing to “look” in a mirror and not see this part of my body.


Oculus versus ZEPETO

Both of these are for social platforms - in ZEPETO’s case, the whole purpose of the app is to create avatars, so it makes sense that there are a lot of customization options. For Oculus, the platform isn’t built specifically around use of the avatars, which is likely why it is more limited.

The best part of ZEPETO was the photobooth section. Shuju and I made our avatars together and then had a lot of fun “taking photos” together - which is done by selecting a photo theme and then adding a friend on the app to be a member of the photo. It actually felt similar to taking photos with friends and not knowing exactly how they are going to turn out and then laughing about it when you see the result.

I also love that if you select a group photo but you don’t have enough people to fill it, it just duplicates your avatar(s):

In terms of interactivity, ZEPETO has built-in animations and uses facial tracking to change facial expressions in the AR camera. Oculus uses the headset tracking and, I think, listens for sound in order to animate the mouth (when I moved my mouth without speaking, nothing happened). This was a little surprising to me, but it makes sense for multi-user interactions in VR.
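I don’t know how Oculus actually implements this, but the general idea of audio-driven mouth animation can be pretty simple: measure how loud the latest microphone buffer is and map that to a “mouth open” amount. Here is a minimal, made-up sketch (none of this is Oculus code), which would also explain why silently mouthing words did nothing:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Minimal illustration of audio-driven mouth animation (not Oculus's actual code):
// convert the loudness of the latest microphone buffer into a 0..1 "mouth open"
// weight that could drive a jaw bone or blend shape on the avatar.
class MouthAnimator
{
public:
    // samples: one buffer of mono microphone audio, values in [-1, 1].
    float Update(const float* samples, std::size_t count)
    {
        if (count == 0)
        {
            return smoothed;
        }

        // Root-mean-square loudness of the buffer.
        double sumSquares = 0.0;
        for (std::size_t i = 0; i < count; ++i)
        {
            sumSquares += static_cast<double>(samples[i]) * samples[i];
        }
        const float rms = static_cast<float>(std::sqrt(sumSquares / count));

        // Gate out quiet background noise, then normalize to 0..1.
        const float gate = 0.02f;
        const float loudness = std::clamp((rms - gate) / (maxRms - gate), 0.0f, 1.0f);

        // Exponential smoothing so the mouth doesn't flicker every frame.
        smoothed += smoothingRate * (loudness - smoothed);
        return smoothed;
    }

private:
    float smoothed = 0.0f;       // current mouth-open weight
    float maxRms = 0.2f;         // loudness treated as "mouth fully open"
    float smoothingRate = 0.3f;  // 0..1; higher = faster response
};
```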


One other huge difference was the platform I built the avatar on. I made the Oculus avatar while in VR, which made the experience quite different from making an avatar on my computer or phone. When creating 3D assets I’m used to being able to turn the asset around in space as I’m making it, but this was much more like getting dressed in front of a mirror, since I couldn’t look at the back of my head or really see my avatar from the side. I also kept going through the mirror in VR when I tried to get close to my reflection. This did a pretty good job of evoking the feeling that this is what I looked like, rather than creating something to represent me.