Como Nossos Pais

Process

Hard to believe this baby’s finally done!

As I mentioned in my final proposal, I picked this song for its melody, lyrics, and structure. I didn’t delve too deeply into what it means to me, so it feels appropriate to do that now. Though I didn’t grow up hearing Como Nossos Pais a lot, I rediscovered it and fell in love with it earlier this year, back when I was still working in an office. Suddenly the lyrics clicked with me, and the bass line and vocals mesmerized me. I must’ve listened to it on repeat for the entirety of my subway ride home at least three days in a row. All that love really prepared me for this project. I must have listened to the first verse, if not the whole song, some 200 times in the last month. You would think I’d be sick of it by now, but I’m not — it still gives me goosebumps almost every time.

I decided to do an “interactive lyric video” because I’ve always really liked lyric videos, but just making an animation without interactivity would be missing out on the possibilities of p5js (since it’d be a lot simpler to just make it in video/animation software if interactivity weren’t a factor). I also realized that the graphics might help an audience who isn’t familiar with the song or its historical context to understand, even if not fully or even consciously, some of its deeper meaning. Maybe you don’t really understand the political context of a military dictatorship declaring war on political art, but seeing handcuffs around a singer’s wrists might give you a sense of wrongness that gets the point across.

My first steps in the process were translating the song’s lyrics, deciding on the graphics, and creating timestamps. I made a spreadsheet to keep track of everything in one place, which in the beginning looked like this:

I split the verses into different lines (called “screen” on the spreadsheet) and laid them out in Illustrator, each on one artboard. I did both versions at the same time, so I could quickly see where the graphics would fit on the screen. To get the stop-motion-y effect, I exported those artboards to Photoshop and used a warp grid to distort the letters ever so slightly (manually, while watching a lot of YouTube videos to keep my brain from going numb). Those warped versions became the second frame for each GIF, toggling back and forth for movement. Initially, each line would be one GIF, for a total of 16 — but since I had problems getting them to show properly in the code, I broke them up into smaller chunks (naming those with the letters in the third column, to keep track). With that separation, the total number of lyric GIFs for each version was 84. It’s a lot, but it worked — and since I’d already made the timestamps for each line, it wasn’t that much extra work (just a lot of exporting from Photoshop).

For the first code test, I had the lyrics and splash screen as functions. I declared a variable songStart that gets set to the current milliseconds when the song starts playing; then, inside my lyrics function, another variable t subtracts songStart from the current milliseconds, so the lyrics always stay in sync whether you click play the moment the sketch loads or wait a long time. Here’s the version of the code from that first run, with a test graphic in the beginning so I could see if the layers would work well. You may notice the GIFs look a little wonky; sometimes they flash a frame for only a split second before switching to the next one. That’s because I didn’t take the frame duration (400ms) into account when writing my timestamps; in the spreadsheet above you can see that numbers like 15, 1025, and 5021 aren’t divisible by 400, which left those tiny gaps between frames. I wanted them to be smooth as butter, so I went back to the spreadsheet and rounded those timestamps to the nearest multiple of 400. On my second test it looked perfect, so I went ahead and added the English version, copying and pasting the same function but changing some of the timestamps to give a few of the translated lines better flow.
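
In case it helps to see the idea, here’s a minimal sketch of that timing logic (the names and file paths are hypothetical stand-ins, not my actual code), assuming the song and GIFs are loaded in preload() with the p5.sound library available:

```javascript
// Minimal sketch of the timing logic (illustrative names, not the actual project code).
// Assumes the p5.sound library for loadSound().
let song;
let lyricGifs = [];
let timestamps = [0, 2400, 4800, 7200]; // start of each lyric chunk, rounded to multiples of 400ms
let songStart = 0;
let playing = false;

function preload() {
  song = loadSound('song.mp3');              // hypothetical file names
  lyricGifs.push(loadImage('lyric-1a.gif'));
  lyricGifs.push(loadImage('lyric-1b.gif'));
  lyricGifs.push(loadImage('lyric-1c.gif'));
  lyricGifs.push(loadImage('lyric-1d.gif'));
}

function setup() {
  createCanvas(800, 450);
}

function mousePressed() {
  if (!playing) {
    song.play();
    songStart = millis(); // remember when playback began
    playing = true;
  }
}

function draw() {
  background(255);
  if (!playing) return;

  // elapsed song time, independent of how long the sketch sat on the splash screen
  let t = millis() - songStart;

  // show the most recent lyric chunk whose timestamp has passed
  let current = -1;
  for (let i = 0; i < timestamps.length; i++) {
    if (t >= timestamps[i]) current = i;
  }
  if (current >= 0) image(lyricGifs[current], 0, 0);
}
```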

For the graphics, I originally wanted to make some purely decorative and only 5 interactive (saving the latter for the most meaningful lines). To make the decorative ones more interesting, they would be animated — I was envisioning them as one continuous line being drawn on the screen. In testing my first GIF, I ran into the same problem I had with the lyrics earlier; they just wouldn’t sync well, leaving chunks of the GIF empty at random times. This was a blessing in disguise, because it made me change my approach entirely to something I think worked a lot more elegantly: all graphics had to be static until interaction, and because of that, all graphics had to be interactive, since there was no longer a good way to differentiate between the two types. I went back and rethought each graphic; the change forced me to be a lot more purposeful about what I was putting on the screen. As an example, my original thought for “Our idols are still the same” was just to have portraits of “old school cool people”, but now these portraits had to change, and in a contextually meaningful way.

With the graphics redone, I imported them using the same function logic as the lyrics. This failed pretty miserably, since the graphics just piled on top of each other instead of replacing each other. I asked Scott for help, and he said this would be easier if the graphics were a class instead of a function. We tried a few things, but nothing worked until I went ahead and changed both lyrics and graphics to be classes. I believe this was the version I presented in our final class, with the graphics actually embedded into the lyrics classes.
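
In rough strokes, the class version looked something like this (a simplified sketch with illustrative names and assets, not the real project code):

```javascript
// Simplified sketch of the class structure (illustrative names, not the real project code).
let lyrics = [];
let songStart = 0;

class Graphic {
  constructor(img, clickedImg, x, y) {
    this.img = img;               // static image shown by default
    this.clickedImg = clickedImg; // image swapped in once the user interacts
    this.x = x;
    this.y = y;
    this.clicked = false;
  }
  display() {
    image(this.clicked ? this.clickedImg : this.img, this.x, this.y);
  }
  checkClick(mx, my) {
    // toggle the graphic when the mouse lands inside its bounding box
    if (mx > this.x && mx < this.x + this.img.width &&
        my > this.y && my < this.y + this.img.height) {
      this.clicked = !this.clicked;
    }
  }
}

class Lyric {
  constructor(gif, startTime, endTime, graphic) {
    this.gif = gif;             // the lyric GIF for this chunk
    this.startTime = startTime; // in ms from the start of the song
    this.endTime = endTime;
    this.graphic = graphic;     // the graphic embedded in this lyric, if any
  }
  display(t) {
    // each lyric only draws during its own window, so chunks replace each other
    if (t >= this.startTime && t < this.endTime) {
      image(this.gif, 0, 0);
      if (this.graphic) this.graphic.display();
    }
  }
}

function preload() {
  // hypothetical assets
  let gif = loadImage('lyric-1a.gif');
  let idol = loadImage('idol.png');
  let idolChanged = loadImage('idol-changed.png');
  lyrics.push(new Lyric(gif, 0, 2400, new Graphic(idol, idolChanged, 500, 100)));
}

function setup() {
  createCanvas(800, 450);
  songStart = millis();
}

function draw() {
  background(255);
  let t = millis() - songStart;
  for (let l of lyrics) l.display(t);
}

function mousePressed() {
  for (let l of lyrics) {
    if (l.graphic) l.graphic.checkClick(mouseX, mouseY);
  }
}
```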

For the final adjustments, I went back to make everything its own class, including the splash screen. I added pseudocode and tidied up as much as I could. The new features I added, based on the presentation feedback, were a hotspots function, which changes the cursor over the clickable areas, and a more explicit explanation (including a clickable example) of the interactivity on the splash screen. I also added a back button so you can go back and change your selection mid-song if you want to. Those extra features went mostly smoothly, until I made the horrible mistake of naming my back button PNG “back”, which ended up breaking the entire code for about half an hour until I could figure out what the issue was. When I renamed it “backbutton” it worked fine — I guess p5js just wanted to give me one last heart attack before the end of the semester.
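
The hotspot check itself is simple: each frame, test whether the mouse is inside any clickable area and set the cursor accordingly. Here’s a minimal sketch of that idea, with made-up coordinates:

```javascript
// Minimal hotspots sketch (made-up coordinates): swap the cursor over clickable areas.
let hotspots = [
  { x: 500, y: 100, w: 120, h: 120 }, // e.g. an interactive graphic
  { x: 20,  y: 20,  w: 50,  h: 50 },  // e.g. the back button
];

function setup() {
  createCanvas(800, 450);
}

function hotspotCheck() {
  let overSomething = false;
  for (let h of hotspots) {
    if (mouseX > h.x && mouseX < h.x + h.w &&
        mouseY > h.y && mouseY < h.y + h.h) {
      overSomething = true;
    }
  }
  // HAND over clickable areas, the default arrow everywhere else
  cursor(overSomething ? HAND : ARROW);
}

function draw() {
  background(255);
  for (let h of hotspots) rect(h.x, h.y, h.w, h.h); // stand-ins for the real graphics
  hotspotCheck();
}
```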

The last step, of course, was to write up this documentation, so I could link it in the final code and close the loop. I couldn’t be more pleased with the end result — it accomplished everything I wanted it to, pretty much the way I wanted it to. If I could make the graphics appear on screen in that continuous-line animated format, it would be pretty great; but that would also mean redrawing them in a different style, and by now I’ve grown attached to them as they look. I’m glad I was able to devote so much time to something I truly care about, which now holds a whole new special meaning to me.

My deep thanks to my professor Scott Fitzgerald for teaching me everything I know about p5js; to my husband David Solo for cracking my back after long hours hunched over my laptop; to Brazilian artists for creating beauty despite the state of things then, now, and always.

Credits

shy ellipse controlled by photocell

Going back to my sketches to find one that would work well with a controller, I instantly thought of my shy ellipse. I hadn’t played around with the photocell sensor yet, so it also came to mind very quickly — initially I thought it’d be cute to make the ellipse become shy once I covered the sensor, as if I’m touching or tickling it and that’s why it’s blushing.

Once again, I had a bunch of technical issues. At first, I was just trying to test the photocell and LED on the breadboard (using Scott’s code), but all of a sudden my Arduino just would not connect. The USB port didn’t show up at all. I looked it up in forums, tried a different USB cable, installed and updated a bunch of software, even resorting to installing everything anew on a different laptop — nothing worked. I thought I had somehow fried the board. I reached out to Scott, who figured out I had to press the reset button twice, and suddenly I could see the port again. Yay!

So the code worked in that test run, but when trying to connect it to my p5js code, I had quite a few issues with the serial communication too. Nothing particularly interesting, just sometimes it wouldn’t connect (and sometimes it wouldn’t let me close the port, so I could try again). I had to quit and restart both the Arduino program and the serial control program, and also reset the breadboard, quite a few times before it finally worked. Here’s how the Arduino code looked at that point — extremely similar to Scott’s code linked above, but without the LED output, and printing the value to serial.

In updating my p5js code, I cleaned it up a bit from last time (using classes instead of functions now). Figuring out how to make the “shyness level” relate to the sensor was pretty straightforward — I just had to replace the mouse/center point distance formula in my setAlpha function with the value calculated by the sensor. Like I mentioned, I initially thought of making the ellipse shy as I covered the sensor, but I found that to be harder to control. The effect was really faint, maybe because of the light level in my room — but no matter how thoroughly I covered the sensor, the ellipse was never fully gray. I realized while testing that my phone flashlight worked really well, so I decided to invert the idea. Here’s how that turned out:

I think this ends up working better — it’s as if the flashlight is “catching” the ellipse or something. In a dark room, getting a bright light shoved in your face might make you a little flustered too.
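
For reference, the swap in the p5js sketch boils down to something like this: the sensor reading takes the place of the mouse distance inside setAlpha. This is a pared-down sketch with illustrative names, and it assumes the latest photocell reading (0-1023) has already been parsed into sensorValue by the serial-reading code; the real sketch’s color logic may differ:

```javascript
// Pared-down sketch of the swap (illustrative; the real color logic may differ).
// Assume sensorValue (0-1023 from the photocell) is updated by the serial-reading code.
let sensorValue = 0;
let shy;

class ShyEllipse {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.alpha = 0;
  }
  setAlpha() {
    // this used to be based on the mouse/center distance;
    // now the brighter the light, the higher the blush alpha
    this.alpha = map(sensorValue, 0, 1023, 0, 255, true);
  }
  display() {
    noStroke();
    fill(200);                       // gray base
    ellipse(this.x, this.y, 120, 120);
    fill(255, 150, 180, this.alpha); // blush layered on top
    ellipse(this.x, this.y, 120, 120);
  }
}

function setup() {
  createCanvas(400, 400);
  shy = new ShyEllipse(width / 2, height / 2);
}

function draw() {
  background(255);
  shy.setAlpha();
  shy.display();
}
```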

final proposal: como nossos pais

For my final project, I want to create an interactive “lyric video” of sorts. I chose Como Nossos Pais (“Like Our Parents”) — arguably the best Brazilian song of all time — for its beautiful melody and haunting lyrics. Its lengthy pauses give plenty of time for user interaction, and its short chorus is only sung twice, which will help prevent the piece from being repetitive.

Some features as I’m envisioning them:

  • A splash screen will briefly explain the history of the song and the ways users can interact with the piece. It will also let users pick between lyrics in the original Portuguese and an English version translated by me.
  • Lyrics will probably be imported as GIFs. I’d like them to stagger on the screen, so I figured it’d be easier to control that by making each one a GIF to begin with. I’d also like them to have a stop-motion-y effect, so I’d alternate between two similar frames for each phrase to achieve that.
  • Graphic elements — 5 interactive and 10 decorative. These should look different, so that the user instinctively knows what is and isn’t clickable (maybe using different colors, thicker lines, etc). Some of them will be simple lines/shapes, while others will be a lot more complex — so I’m not sure if they should be coded or imported as PNGs/GIFs. If they’re coded, I’ll probably be using the scribble library to get the stop-motion effect — but it’s a limited library, so it may be a lot of work for the more complex shapes.
  • A reset button will allow users to go back to the splash screen at any time during the song.

So far I have:

  • Finalized translated lyrics
  • Noted ideas and created timestamps for graphic elements
  • Started creating timestamps for lyrics

Once I’m done with the timestamps, I may go ahead and test just the first verse (in a single language) to make sure the lyrics, graphics, and code work as intended. After that, I’ll go back and finish compiling the lyrics and graphics, and build the code for the rest. The splash screen and reset button will probably be the last steps.

love meter

My initial idea for this was to make a circuit with two knobs and one LED, where each person would turn one knob and the difference between the two would control the LED’s brightness in an inverse relationship (meaning, if person A turns their knob to 75% and person B turns theirs to 70%, they’re pretty “in sync”, so the LED would shine brightly to indicate that).

When building the first test, I couldn’t get the second knob to light up the same LED, so I connected a second LED and ended up liking that because it kind of looked like a heart. I later realized I had a typo in my code (I wrote “potVal = analogRead(potVal)” instead of potPin inside the parentheses), so maybe my first idea would have worked anyway. But by that point I decided to keep it as it was — both because I liked the look, and because I wouldn’t have to think about how to calculate that margin of difference between both potentiometers.

the top half of a heart!

The next step was adding a switch to show both lights at the same time. This way, there’s no “cheating” by seeing how bright the first person’s light turns. I also turned one of the potentiometers upside down, to make it harder to tell which direction turns the light brighter or dimmer.

The code looked like this — I reused the bits from the switch-as-toggle code, but the lights didn’t actually turn off when I clicked the second time, since I only ever wrote the brightness with analogWrite. While testing I just hit the reset button instead, so I forgot to fix the code until after I dismantled the circuit.

Here it is in action — in the first test, you’ll see one of the lights is a lot dimmer than the other (uh oh), while in the second test they’re closer together (phew, marriage saved). As far as compatibility goes, it wouldn’t matter whether they’re dim or bright, just that they’re similar in brightness. Maybe two dim lights would indicate a pretty chill and mellow couple.

Here is the finished product:

In a future version of this, I’d love to use the RGB LED and make it a nice, cute, romantic shade of pink. I’d have to get that margin of difference formula right, and figure out how to get the pink to stay consistent across different brightness levels (maybe I’d have to set some specific margins for input and shades of pink for output, and it wouldn’t be as seamlessly analog?). I’d also take the opportunity to fix that toggle switch so I don’t have to hit the reset button. Lastly, I’d love to use some bigger, cushier potentiometers to save my poor thumb during the dozens of tests I did. Make sure to moisturize when you’re done with those fellas.

the things we do for love

close the door!

I created a switch using digital input to let someone know whether they’ve successfully closed a door or not. It’s extremely clunky, ugly, and needs to be reset every time you open the door again — which I think makes it very funny. To start, here’s a picture of the finished product:

my process

I had a ton of technical issues trying to get this done last night. It took me about an hour and a half just to get the Arduino code working — first I got stuck on the board installation, then when I tried to cancel and start over I got some error messages, then I decided to update my OS to see if that was causing issues, then when I had everything installed I couldn’t find the USB port attachment for my laptop, then when everything was finally set, I tried to run the code some five times without realizing I had the wrong Arduino board selected. A few rage tears were shed in the process, but the relief I felt when I got that first LED light working was immense.

From there, I followed Scott’s switch-as-toggle code, adding another LED to provide both “good” and “bad” feedback. Here’s my code:

LEDPinG and ledGState refer to the Green light, which signifies the door is closed (good). LEDPinR and ledRState refer to the Red light, which signifies the door is open (bad). Toggling the switch alternates between the two lights — if one is off, the other is on.

At that point, I already had the board taped up to my wall, but hadn’t figured out how to make the door trigger the tiny button. Here’s me testing that the code worked (you can see I accidentally pressed it twice. It’s such a tiny button.)

Without fancier switches available, I ended up using a skewer taped to my door to trigger the button. This is the first test of that — you’ll notice I moved the switch, taping it onto the actual doorframe. I broke the skewer in half, but it was still too long — it just lightly touched the button, I think because the leverage point was too far away from the button.

In the last test, I cut the skewer in half once again to bring it closer to the board. I also added some text to give additional feedback to the user. I think the way the tape peels back has some absurdist, amazing comedic effect to it. It gives the skewer/tape combo a hint of anthropomorphism, something reminiscent of a runner’s legs giving out after crossing the finish line in a marathon. It’s very silly, and I love it.

handless switch

My switch is operated by an aluminum elbow patch that connects to my wires, also coated in foil to create a larger surface area for conduction.

Here’s the circuit — I taped the wires to my desk so they’d stay put and separate.

And here it is in action —

Dollmaker

Process

This was a bit of a doozy.

I started out by planning the maker: figuring out how many customizable features I could make, and how many selections for each. Then I got to drawing — I made all layers in the same document using Adobe Fresco, so I could check how they fit together. I then opened those in Photoshop to export each individual layer more easily. There are 155 layers in total (10 for background, 10 for skin, 5 for mouth, 5 for nose, 20 for eyes, 35 for clothes, and 70 for hair — hair combos require 1-3 layers depending on the style).

The next part was figuring out how the interface would look. Since these were PNGs, I needed every option to be clickable — I couldn’t just show each hair style once and then have the color options below, because I didn’t know how the code would “remember” which hair style you selected once you changed the color. If I’d taken the time to code each shape instead of using PNGs, I think that would be more doable — but holy cow, what an effort that would be.

I drew the interface in Illustrator, which then helped me copy the coordinates into p5js. My whole code has two classes: Interface and Change. Interface creates the base of what the user sees, and Change creates the hitboxes where the user can click, and the actions that take place once they click. Change gets called into the mouse and key functions to change or clear your selections.
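
In very broad strokes, the structure was something like this (a heavily simplified sketch with made-up coordinates and a single option for a single category, nothing like the real 155-layer version):

```javascript
// Very rough sketch of the two-class structure (made-up coordinates, one option per category).
let ui, change, skinImg;

class Interface {
  display() {
    background(240);
    fill(0);
    textSize(16);
    text('skin', 20, 40);        // category label
    fill(220, 170, 140);
    ellipse(100, 35, 30, 30);    // a clickable option bubble
  }
}

class Change {
  constructor() {
    this.skinChoice = null;      // the PNG currently selected for this category
  }
  clicked(mx, my) {
    // hitbox around the option bubble drawn by Interface
    if (dist(mx, my, 100, 35) < 15) {
      this.skinChoice = skinImg;
    }
  }
  clear() {
    this.skinChoice = null;
  }
  displayDoll() {
    if (this.skinChoice) image(this.skinChoice, 200, 80);
  }
}

function preload() {
  skinImg = loadImage('skin1.png'); // hypothetical file name
}

function setup() {
  createCanvas(600, 600);
  ui = new Interface();
  change = new Change();
}

function draw() {
  ui.display();
  change.displayDoll();
}

function mousePressed() {
  change.clicked(mouseX, mouseY);
}

function keyPressed() {
  change.clear(); // hypothetical: any key clears your selections
}
```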

I liked the simplicity of the structure, but it did involve a lot of typing. I used this text replacement website to help me on the long stretches of text. I also initially separated each category into its own class (like InterfaceText, InterfaceSkin, etc.) but found it took a similar number of lines; I was just copying and pasting constructors instead of x and y coordinates. I might be totally off base about that, since I did decide to consolidate early on.

My biggest issue was getting the layers to show up properly. Initially, I had the draw function end without looping — that was the only way I got the background colors to stay static, instead of disappearing as soon as you unclicked. But at that point I was also drawing the background colors with a new rect(), which I later switched to a PNG like the rest, so I think starting with the background was a mistake anyway. I got to a point where the selections stayed on, but only one layer at a time — so you’d have a disembodied nose, or a featureless body, and so on. Not great. I tried starting and stopping the loop at various points, in basically any combination I could think of, until I decided to wait for office hours and ask. Scott showed me how to make it work — in the Change class, he created empty placeholder images, which are then populated by my PNGs once you click on a category. That way they stay static while the draw loop is going, and also don’t write over each other.
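
The way I understand the fix, it amounts to something like this: every layer starts out as an empty placeholder image, gets replaced by the selected PNG when you click, and all the layers get redrawn in order every frame so they never pile up. A simplified sketch (not Scott’s actual code):

```javascript
// Simplified sketch of the placeholder-image approach (not the actual project code).
let layers;

function setup() {
  createCanvas(600, 600);
  // one empty placeholder per category; drawing a transparent 1x1 image shows nothing
  layers = {
    background: createImage(1, 1),
    skin: createImage(1, 1),
    eyes: createImage(1, 1),
    hair: createImage(1, 1),
  };
}

function draw() {
  background(240);
  // redraw every layer in order each frame, so selections persist
  // and never write over each other out of order
  image(layers.background, 0, 0, width, height);
  image(layers.skin, 0, 0, width, height);
  image(layers.eyes, 0, 0, width, height);
  image(layers.hair, 0, 0, width, height);
}

// called from the hitbox code when the user picks an option
function selectLayer(category, img) {
  layers[category] = img; // the placeholder is replaced by the chosen PNG
}
```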

Some tweaks I would like to make with more time:

  • At one point I had some small interactions where the bubbles got bigger when you clicked on them. I tried to do it again once the code was all set, but it didn’t work. I didn’t have a lot of time to investigate, but I think that would be a neat feature.
  • A couple of people who tested this for me had some confusion about the hair/eye/clothing color bubbles, since they look the same as the background/skin ones but aren’t clickable. Maybe the small interaction should also exist on hover, to give you some subtle feedback of where you can and can’t click. Maybe the colors that aren’t clickable should be squares instead of circles.
  • Very small and silly thing — I should switch the mouth and nose Y coordinates on the interface. Having them inverted breaks the logical flow of facial features. No one commented on it, but now that I noticed, it’s really bugging me.
  • There are a couple of minor mistakes with the drawings depending on what you select. I won’t point them out, but they’re there. If they keep bothering me I’ll just have to go back into Photoshop and fix the files. 😛

scribbly guy

For my simple library sketch, I chose to draw a stick figure man with the p5.scribble library. The lines move at a low frame rate, and the border around him changes colors and line angles with each new frame.

my process

This one was pretty straightforward. I kept the library’s GitHub page open so I could reference how the elements worked. I started with the figure itself — the most complicated part here was getting the right coordinates for the curves in the hand. Once I finished the figure, I wanted to use the hachures in some way, so that’s where the border came in. I separated it into a new class to keep it tidier, and set it so the colors stay light and pastel-y. I wanted this border to move slower than the stick figure — maybe every other frame that the figure moves — but didn’t manage to find a good solution for that. Hopefully the colors are light enough that it’s not too frantic!
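
For anyone curious about the library, the basic pattern looks roughly like this (a pared-down sketch with a much simpler figure than mine, assuming p5.scribble.js is loaded alongside p5 and that I’m remembering its function names right):

```javascript
// Pared-down sketch of the idea, assuming p5.scribble.js is loaded with the page.
let scribble;

function setup() {
  createCanvas(400, 400);
  frameRate(4);               // low frame rate so the wobble reads as stop motion
  scribble = new Scribble();
}

function draw() {
  background(255);

  // hachure border: fill the canvas, then blank out the center so only a frame remains;
  // the color and line angle change with every new frame
  stroke(random(150, 230), random(150, 230), random(230, 255)); // light, pastel-y tones
  let xs = [0, width, width, 0];
  let ys = [0, 0, height, height];
  scribble.scribbleFilling(xs, ys, 8, random(360));
  noStroke();
  fill(255);
  rect(40, 40, width - 80, height - 80);

  // stick figure: the scribbled lines re-draw slightly differently each frame
  stroke(0);
  noFill();
  scribble.scribbleEllipse(200, 120, 60, 60); // head
  scribble.scribbleLine(200, 150, 200, 260);  // body
  scribble.scribbleLine(200, 180, 160, 230);  // arms
  scribble.scribbleLine(200, 180, 240, 230);
  scribble.scribbleLine(200, 260, 170, 330);  // legs
  scribble.scribbleLine(200, 260, 230, 330);
}
```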

response to Art and the API

Thorp explores how we can use APIs to draw relationships between different sets of data, creating new meanings, posing particular questions, or recontextualizing information.

Our world has so much data and our brains are fairly bad at processing much (or most) of it — words and numbers lose meaning as we’re exposed to them over and over again. As the United States nears a quarter million coronavirus deaths, each individual case feels depersonalized, abstracted. You can’t hold that many people in your thinking at the same time without losing the richness of their humanity in one way or another. We try to contextualize it with yet more numbers: “Coronavirus Has Killed More Americans than Vietnam, Korea, Iraq, Afghanistan and World War I Combined” but that can sometimes just kick the can down the road — what do those numbers mean?

With the drone strike examples, Thorp shows ways to humanize those cold hard numbers in a way that actually disrupts our normalization of them. Assigning names — especially names the reader is familiar with — to seemingly distant tragedies is an incredibly effective way to bring them closer to our reality. APIs can help facilitate this process by bridging data much more quickly and efficiently than we could ever hope to manage manually.

visualizing An API of Ice and Fire

This week, I used an API of Ice and Fire to visualize the evolution of my beloved series over the years.

Click on the picture to see it on p5js. If the text doesn’t look like the screenshot, please refresh once or twice!

The code takes the following values from each book’s API:

  • Title
  • Release year
  • Number of characters
  • Number of point of view (POV) characters
  • Number of pages
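
The data pull behind that list boils down to something like this (a simplified sketch; the field names are as I remember them from the API’s docs, and the pageSize parameter is assumed so all the books come back in one request):

```javascript
// Simplified sketch of the data pull (field names as I recall them from the API docs).
let books = [];

function preload() {
  // pageSize is assumed to keep all the novels in a single request (the API paginates by default)
  loadJSON('https://anapioficeandfire.com/api/books?pageSize=12', gotBooks);
}

function gotBooks(data) {
  for (let key in data) {
    let b = data[key];
    books.push({
      title: b.name,
      releaseYear: int(b.released.substring(0, 4)), // "released" is an ISO date string
      characterCount: b.characters.length,          // character URLs -> count
      povCount: b.povCharacters.length,
      pages: b.numberOfPages,
    });
  }
}

function setup() {
  createCanvas(600, 400);
  noLoop();
}

function draw() {
  background(255);
  textSize(12);
  books.forEach((b, i) => {
    text(`${b.title} (${b.releaseYear}): ${b.characterCount} characters, ` +
         `${b.povCount} POV, ${b.pages} pages`, 20, 30 + i * 20);
  });
}
```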

I decided to pick these because you often see commentary within the fandom that the book series has gotten “bloated” over the years. The numbers are undeniable: we have twice as many POV characters as we did in the first book, and A Feast for Crows has nearly three times as many character mentions as A Game of Thrones did. Whether that’s bloat or growth is up for debate.