midterm proposal

For my midterm project, I’d like to build on my face varieties sketch by creating a robust, clearly interactive doll maker (like this one, for example). It should involve:

  • A clickable interface that lets users select the option they want on their doll (instead of a random selection with each click)
    • This should have some kind of hover/active state animations/effects, to give the user some feedback.
  • Complex/detailed shapes including:
    • Face/body shapes and skin colors
    • Hair shapes and colors
    • Eye shapes and colors
    • Nose shapes
    • Basic clothing and background options
    • These options should be able to render twice: once in the interface where the user can select them, and again on top of the doll itself once selected.

I will probably draw all the shapes before building the code, to make sure they fit together properly — drawing maybe 4-7 options for each category. I will either import the shapes as images, or use beginShape to draw them in high detail. Each category will be a different class. The interface should be clean and intuitive without instructions needed, and will probably be its own class too. To get the shapes to render twice, the options that show on top of the doll may be a function within their respective classes, or their own class as a whole — regardless, this might be the most complicated part. I imagine I’ll need a lot of if statements to make sure only the selected option is shown on top of the doll, instead of each option getting added on as it’s clicked. 

[A line separator to denote the passage of time I spent writing the high level pseudocode.]

In this, I made “doll” its own class and decided I’ll try rendering the selected categories in there. We’ll see if that’s the right choice!
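As a rough sketch of that idea (class and method names here are my own placeholders, not the actual pseudocode), the Doll could hold exactly one selected option per category, so picking a new option replaces the old one instead of stacking:

```javascript
// Hypothetical sketch: a Doll that stores one selected option per
// category, so selecting a new hair style replaces the old one
// instead of piling on top of it.
class Doll {
  constructor() {
    // one slot per category; null means "nothing chosen yet"
    this.selected = { skin: null, hair: null, eyes: null, nose: null };
  }

  select(category, option) {
    // overwriting the slot is what keeps options from stacking up
    this.selected[category] = option;
  }

  render() {
    // in p5.js this would call each selected option's draw code;
    // here it just reports what would be drawn
    return Object.entries(this.selected)
      .filter(([, option]) => option !== null)
      .map(([category, option]) => `${category}:${option}`);
  }
}
```

With this shape, most of the "lots of if statements" worry goes away: because each category has a single slot, only the latest selection can ever render.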

206,348,800+ faces

This week, I created a face generator that generates a variety of colors and/or shapes for skin, hair, eyes, nose, and mouth every time the mouse is clicked. Each category offers:

  • Skin: 49 colors
  • Hair: 47 colors and 5 shapes
  • Eyes: 56 colors and random sizes between 20 and 40px
  • Nose: 4 shapes
  • Mouth: 4 shapes

I don’t actually remember if that’s how the math works, but multiplying all those numbers got me to 206,348,800. In reality, since the random eye size is any decimal from 20 to 40, I think the number is essentially infinite.
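For what it's worth, the arithmetic does check out if the eye size is treated as 20 discrete steps (the 20px span from 20 to 40): the listed counts alone give 10,317,440, and multiplying by 20 lands exactly on the quoted figure.

```javascript
// Re-checking the combination count from the category list above.
const skinColors = 49;
const hairColors = 47, hairShapes = 5;
const eyeColors = 56;
const noseShapes = 4, mouthShapes = 4;
const eyeSizes = 20; // treating the 20–40px range as 20 one-pixel steps

const listedOnly =
  skinColors * hairColors * hairShapes * eyeColors * noseShapes * mouthShapes;
const withEyeSizes = listedOnly * eyeSizes;

console.log(listedOnly);   // 10317440
console.log(withEyeSizes); // 206348800
```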

my process

My first step was to define the colors I wanted to use. I did not want to just randomly generate RGB values — I wanted these faces to look realistic, so I looked up some skin/hair/eye color palettes online. You’ll see some funky hair colors because hair can be dyed, but no face will have green skin or red eyes (some of the skin tones do run a little too red, but I liked the way that looked — maybe chalk it up to a sunburn). Somewhat painstakingly, I typed out a list of the hexcodes I was happy with and put them into arrays. I stored those in a separate file (colors.js) to keep the sketch a little neater.

I then built classes for the face, eyes, and hair. To test how it all worked, I initially did only one type of hair (the big afro, since it was the easiest shape). I built everything into the constructor, which I later learned from Scott was not the best way to approach this. It worked fine for generating new colors, but I got fully stumped figuring out how to generate new shapes for hair, nose, and mouth. I tried a lot of silly things, like creating arrays with beginShape vertices all inside quotation marks, or creating a class for each hair style and then trying to randomly generate those, but didn’t get anywhere until I got a chance to ask Scott for help during office hours. He showed me how to use if statements and assign a number to each shape, in order to then randomly call one of those numbers when generating a new face.
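The pattern, as I understand it, looks something like this (the shape names are placeholders, not the real ones): give each style a number, pick a number at random, and branch on it with if statements.

```javascript
// Number each hair style, then randomly pick one of the numbers.
// In p5.js this would be floor(random(5)); Math.random behaves the same.
function pickHairStyle() {
  return Math.floor(Math.random() * 5); // an integer from 0 to 4
}

// Branch on the number. In the real sketch each branch would draw
// its style with beginShape()/vertex()/endShape() instead of
// returning a name.
function hairStyleName(style) {
  if (style === 0) return 'afro';
  if (style === 1) return 'bob';
  if (style === 2) return 'buzz';
  if (style === 3) return 'ponytail';
  return 'waves'; // style === 4
}
```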

Once those if statements were working, I just fine-tuned the shapes and standardized the variable names to tidy up the file. I added the randomized eye size to give yet another layer of variety and really liked how that turned out. My last step was going back and expanding on my pseudocode — I was writing it as I went along, but wanted to make sure it was as descriptive and clear as it could be. If I had more time, I’d make a few more hair styles and nose shapes, but overall, I’m super happy with the results.

response to How Object Orientation Made Computers a Medium

As promised, this was not an easy read — maybe easier for some than for me. I’ve always found the concept of programming languages completely mind-boggling, the same way that thinking about how humans invented linguistic tools like prepositions hurts my brain. It doesn’t fully make sense to me that with enough programming languages, you can make some binary code deep in your laptop’s core turn into a bouncing ball on a screen — yet every Wednesday evening we do something like that in class. It’s a little bonkers.

Even though I don’t fully understand how programming languages work, this chapter gave me a greater appreciation of their depth and complexity; it helped demystify part of the process. I struggled to understand just how OOP differed from other paradigms (or rather, how those paradigms worked exactly, beyond just calculating numbers). I tried to think of how I could code something from my p5js sketches in a different language, or what something like my Twitter feed would look like without classes of objects. From my limited understanding, OOP creates (partly or fully, I’m not exactly sure) consistency and repetition, which are hallmarks of good design; each tweet on my feed has to look the same for me to understand them as equals or as part of a logical group, for example. I think this consistency is what Alt identifies as the ultimate cause of our understanding of computers shifting into “media” — suddenly there is structure and pattern framing the content I see on the screen; I can interpret it as something specific, the same way I would interpret small printed text on a thin, foldable, rectangular piece of paper as a news article (back when I read those in print, anyway). Maybe tools become media once they can hold and render new information that can be interpreted beyond what is simply relevant to the physical object itself.

glitter party

For my dance party, I wanted some ellipses in random colors, sizes, and with random movements, all pulsating together. I didn’t quite get to the level I was hoping for, but I like the results as they are.

my process

I started on the background, creating custom functions for the glitter and the light beams. Inside the draw function, I added another “background” with a low opacity to give the effect of the light slowly fading out as it gets written over.
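That fade works because each translucent background blends every pixel a little toward the background color, so old drawings decay geometrically frame by frame. Here's a standalone model of a single color channel (assuming standard alpha blending, with alpha out of 255 as in p5.js):

```javascript
// One frame of drawing a translucent background over an existing pixel:
// result = old * (1 - alpha/255) + background * (alpha/255)
function fadeStep(pixel, bgColor, alpha) {
  const a = alpha / 255;
  return pixel * (1 - a) + bgColor * a;
}

// A bright pixel (255) fading toward a black background with alpha 25:
let p = 255;
for (let frame = 0; frame < 50; frame++) {
  p = fadeStep(p, 0, 25);
}
// after ~50 frames the trail is nearly gone (p is close to 0)
```

Lower alpha means longer, slower trails; higher alpha snaps the light out faster.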

I modeled the rest of the code after the examples in our class videos, altering them to get results close to what I wanted. The ellipses stay in place, just getting bigger and smaller — I wanted a smoother pulsating effect timed with the rest of the code, but couldn’t figure out how to achieve that. I did add some randomized movement as the mouse hovers around the canvas, for some extra fun. It’s based on the current and previous mouse locations, so the faster you move the cursor, the faster the ellipses dance. I played around with a few different formulas for that one, but this one pleased me the most.
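The current-versus-previous idea can be sketched as an offset proportional to how far the mouse moved since the last frame (p5.js exposes the previous position as pmouseX/pmouseY; the jitter factor here is a made-up placeholder):

```javascript
// Offset for an ellipse, scaled from how far the mouse moved since
// the last frame: a fast cursor produces a big offset, a still
// cursor produces none.
function danceOffset(mouseX, mouseY, pmouseX, pmouseY, jitter = 0.5) {
  return {
    dx: (mouseX - pmouseX) * jitter,
    dy: (mouseY - pmouseY) * jitter,
  };
}
```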

Beyond achieving a smoother pulse and timing, I would have loved to figure out a good way to avoid the ellipses writing over one another — still with randomized coordinates, but if a spot’s already “taken,” the ellipse will find a different one. I looked around extensively but couldn’t find a formula beyond just pre-determining the coordinates and shuffling their order (which I did in this sketch to test it out).
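One approach that avoids pre-determined coordinates is rejection sampling — not from my sketch, just a sketch of the idea, using plain Math.random in place of p5's random: keep picking random spots and throw away any that land too close to a spot that's already taken.

```javascript
// Rejection sampling: random positions that keep a minimum distance apart.
function placeEllipses(count, width, height, minDist, maxTries = 10000) {
  const placed = [];
  let tries = 0;
  while (placed.length < count && tries < maxTries) {
    tries++;
    const x = Math.random() * width;
    const y = Math.random() * height;
    // reject the candidate if any existing ellipse is too close
    const tooClose = placed.some(
      (p) => Math.hypot(p.x - x, p.y - y) < minDist
    );
    if (!tooClose) placed.push({ x, y });
  }
  return placed; // may return fewer than count if the canvas fills up
}
```

The maxTries cap matters: if the canvas is too crowded for the minimum distance, the loop gives up instead of spinning forever.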

response to What Exactly IS Interactivity?

Crawford initially defines interaction as “a cyclic process in which two actors alternately listen, think, and speak,” further clarifying that these actors must be “purposeful creatures.” My first thoughts were — what counts as purpose and what counts as creature? A roomba and an automatic door might interact with each other, triggering each other back and forth. In some sense, they both have “purposes,” since they were both programmed to perform certain actions, and in some other weird sense, maybe the roomba is a creature (does it have a nickname? do we assign a personality to it?). But, as Crawford points out with the refrigerator light example, it’s not a very meaningful interaction even if it’s semantically appropriate (“this kind of interaction is silly and beneath the intellectual dignity of almost everybody”).

In my mind, the way we use the word interaction is basically confined to human beings, and maybe highly intelligent animals. A puppy can interact with its owner and even train them (barking until given a treat), but my plant isn’t interacting with me when its leaves droop, even if I tell my husband “she’s begging for water.” The puppy is anticipating (or hoping for) the owner’s next action; the plant can’t know it’ll be watered. Maybe “cognizant” feels more appropriate than “purposeful,” or maybe I’m just needlessly complicating an already elegant definition. All in all, I agreed with most of his description of interaction as a continuum — a definition doesn’t have to be black and white; it just has to be useful.

shy ellipse

This week’s assignment was to draw a simple shape and create an interaction that causes the shape to demonstrate an emotion. I chose to make a shy ellipse.

my process

Initially, I just wanted to make a gray ellipse with two blushing circles on its “cheeks” — I wanted these circles to be gradient, so they’d fade somewhat naturally towards the outer edge, and I also wanted the whole transparency to decrease as the mouse hovers over the gray ellipse. That way, it’s blushing as you come close to it.

After a long search, I found an example of the radial gradient I was looking for — made by owenmcateer. You have to select an outer and inner color, and the lerpColor function creates a very smooth-looking effect to transition between them. There’s a Math function there to calculate the number of these steps, but I’m not going to pretend I fully understand it because I was pretty bad at high school geometry.

So I took this function and added it to my code. I tried setting the outer color to just be transparent, and it took way too long for it to click that this would not work. The color at the very edge might be transparent, using some alpha setting, but then the second-to-last step would be that original color with something like .999 opacity — because the lerpColor function, as far as I can tell, cannot calculate both the number of steps between the colors and their opacity. So to get around that, I made the outer color match the color of the ellipse. The result is a transparent appearance, which is all I wanted anyway.
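The matching-outer-color trick can be modeled with a standalone per-channel lerp (p5's lerpColor interpolates each of R, G, and B roughly this way; the numbers below are made up):

```javascript
// Linear interpolation of a single color channel, the way p5's
// lerpColor treats each of R, G and B.
function lerpChannel(inner, outer, t) {
  return inner + (outer - inner) * t;
}

// Step an inner pink channel (230) toward an outer value that
// matches the ellipse (200). At t = 1 the ring equals the ellipse
// color, so the gradient reads as "fading to transparent".
const steps = 5;
const ring = [];
for (let i = 0; i <= steps; i++) {
  ring.push(lerpChannel(230, 200, i / steps));
}
// ring runs from 230 down to 200 in even steps
```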

At this point, I had the gradient circles, but needed the transparency to change as the mouse hovered close to the gray ellipse. Since I couldn’t play with the transparency of the circles themselves, I had to create a “cover up” ellipse that would disappear as the mouse hovers closer to it. Looking through the p5js references and help, I found the dist function and the setAlpha function. The dist function takes four values — the x and y coordinates of two points, in this case the center of the gray ellipse and the cursor location — and the resulting distance becomes the number that sets the alpha. That way, the closer you get to the ellipse, the lower the opacity of the cover up. In my code, I ended up dividing the distance by 1.2, just to create more of a buffer (so the blushing starts a little bit before you actually hover over the ellipse).
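Numerically, that mapping looks like this (p5's dist is just the Pythagorean distance; I've added a clamp to the 0–255 range that alpha values use, which the original code may not need):

```javascript
// Pythagorean distance between two points, like p5's dist(x1, y1, x2, y2).
function dist(x1, y1, x2, y2) {
  return Math.hypot(x2 - x1, y2 - y1);
}

// Alpha for the cover-up ellipse: a far cursor keeps the cover
// opaque, a near cursor makes it transparent so the blush shows.
// Dividing by 1.2 starts the fade a little before the cursor
// actually reaches the ellipse.
function coverAlpha(centerX, centerY, mouseX, mouseY) {
  const a = dist(centerX, centerY, mouseX, mouseY) / 1.2;
  return Math.min(255, Math.max(0, a));
}
```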

This was my first screen recording when I got this to work — I was super excited about it.

I could’ve stopped there, but I decided to keep trying to add more emotion to the ellipse. So I changed the color of the base ellipse, and also made the stroke a darker pink. I had to then make a second cover up ellipse in order to mimic a stroke, otherwise it wouldn’t change colors.

Lastly, since my dad said he couldn’t tell what emotion the ellipse was feeling (he thought the pink circles were eyes, I guess), I decided to draw a face. I wanted the face to only show up when the mouse hovered (in order to stay true to the “simple shape” part of the assignment — the shape stays simple until you interact with it). I created more functions for the eyes and smile, in order to be able to use the setAlpha and dist functions once again. For the smile, I used a bezier curve that took maybe a whole hour to get just right. No screen recording this time, because this was the end result you see in the code (click here to see it again).
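For the curious: the curve p5's bezier() draws is a cubic Bézier, two anchor points plus two control points that the curve leans toward but never touches — which is exactly why nudging the controls into a nice smile takes so long. One coordinate can be evaluated standalone:

```javascript
// Evaluate one coordinate of a cubic Bézier at parameter t (0..1);
// p0 and p3 are the anchors, p1 and p2 the control points.
function cubicBezier(p0, p1, p2, p3, t) {
  const u = 1 - t;
  return u * u * u * p0
       + 3 * u * u * t * p1
       + 3 * u * t * t * p2
       + t * t * t * p3;
}
```

At t = 0 and t = 1 the curve passes exactly through the anchors; everything in between is pulled by the control points.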

To say goodbye, here are two screenshots of the ellipse in different resolutions, since I used proportionate coordinates to the window height and width for everything. The first one is how I intended it to look, since it was in the code preview window, but I think it’s pretty cute either way.

response to Delusions of Dialogue: Control and Choice in Interactive Art

It’s hard to articulate, and to read about, programming and art — maybe because the two fields sit at opposite ends of the subjectivity scale. As Campbell writes, we tend to see computers and programs as passive tools with on and off switches, presenting “a closed set of possibilities” and “not capable of subtlety, ambiguity or question.” Art, on the other hand, seems to thrive on ambiguity and question.

This opposition creates tension, and tension generates interest. Campbell describes examples of interactive art that generate that interest by asking viewers to examine these perceived contrasts between art and code — between visible and invisible, meaningful and random, continuous and discrete, and so on — and the differences in how viewers respond to each.

The example that resonated most with me was the piece in which sensors calculate the viewer’s distance, and sound and image fade as the viewer approaches. As Campbell explains, without a slider bar, this discrete calculation seems continuous, so it removes the natural instinct “to look for a logical reason to make the correct choice.” It reminds me of all the times I adjusted my TV volume to a round or “pretty” number, trying to follow a logical reason that obviously had nothing to do with the volume itself. These instincts are also hard to articulate, being so deeply ingrained in us — which is why art and tools that help highlight them can be so fascinating.

p5js selfie

As my first Creative Coding assignment, I had to draw a selfie using p5js. It was a lengthy process but I’m pretty satisfied with how it looks! For starters, here’s the end result.

my process

My first thought was to use Photoshop actions to make my picture cartoony, then trace over it in Illustrator, and use the anchor points to create coordinates for “quads” in p5js (four-sided polygons).

Original selfie
All my Illustrator layers

I didn’t realize until basically the end that p5js also allowed more complex shapes — so I could just use each anchor point in my Illustrator layers, instead of dividing these pretty complex shapes into quads. Oops.

The quad version was not only more time-consuming, but it also produced a worse drawing, because you could see the gaps between the quads. I thought that might be fixed by adding strokes, but then the really sharp points created wonky lines, which was also not great. In the end, I redid all the layers as complex shapes instead of quads. Given the end result is a lot more polished, I’m glad I took the time.

Close-up of the gaps on my hair