chromsan-Book

The Extended Encyclopedia of Philosophy: A collection of twelve new -isms. A sample chapter can be viewed here.

For this project, I wanted to create some sort of encyclopedia of new terms. I was originally thinking about doing made-up wars, but in the end shifted to creating new philosophies.

Since I was making an encyclopedia, it was fitting to source the text from Wikipedia. This has a few advantages. First, Wikipedia has a nice API that makes searching for pages and getting the text of any page very easy in JavaScript. Second, I could get any page Wikipedia has to offer, which means that I could create many interesting combinations. Finally, all text on Wikipedia is explicitly free to be remixed.

I began by collecting a list of all the -isms that Wikipedia has articles on. There's a nice list here. When you request a page, Wikipedia provides the entire HTML structure, unparsed, so JavaScript, which can parse HTML natively in the browser, is a natural fit for this task. I wrote a short script using p5 to do this. I ended up with a JSON file of all the -isms Wikipedia has to offer, structured to include the URLs from the hyperlinks on the page, so I could easily get each article with one API call.
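
A minimal sketch of this step (assuming p5.js and the MediaWiki parse API; the page title and link filter below are placeholders, not the exact ones used):

var isms = {};
 
function setup() {
  noCanvas();
  // origin=* enables CORS so the request works straight from the browser
  var url = 'https://en.wikipedia.org/w/api.php' +
            '?action=parse&page=Glossary_of_philosophy' + // placeholder title
            '&prop=text&format=json&origin=*';
  loadJSON(url, gotPage);
}
 
function gotPage(data) {
  // the API hands back the page as one blob of unparsed HTML
  var doc = new DOMParser().parseFromString(data.parse.text['*'], 'text/html');
  var links = doc.querySelectorAll('li > a');
  for (var i = 0; i < links.length; i++) {
    var term = links[i].textContent;
    if (/ism$/.test(term)) {
      isms[term] = links[i].getAttribute('href'); // keep the article URL
    }
  }
  saveJSON(isms, 'isms.json');
}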

I then started to make the made-up philosophies. Some are created by mashing two together: dropping the -ism on the first and replacing it with -ist. Others are created by adding a Latin prefix from this list (which I parsed into a JSON file in the same manner as before) to the first term. This yields an interesting list of new philosophies like:
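
To be concrete, the two mashing strategies might look something like this (a sketch, assuming each term is a plain string ending in "-ism"):

// "Stoicism" + "Nihilism" -> "Stoicist nihilism"
function mashTwo(ismA, ismB) {
  return ismA.replace(/ism$/, 'ist') + ' ' + ismB.toLowerCase();
}
 
// "omni" + "Theism" -> "omnitheism"
function addPrefix(latinPrefix, ism) {
  return latinPrefix + ism.toLowerCase();
}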

I then collected the pages for each of the original philosophies used and parsed them to get just the first couple paragraphs that summarize the terms. These were then mashed together to get around 4-6 sentences of description of the new term. I did a little work to replace mentions of the original terms with the new term to make it feel a bit more natural. This entire process was done in another p5 script. It results in descriptions such as:
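
The mashing step itself can be sketched in a few lines (the naive sentence splitting here is also roughly why the spacing and period issues noted below crept in):

// take the first few sentences of each source summary, then re-brand
// mentions of the old terms with the new coinage
function mashDescriptions(textA, textB, termA, termB, newTerm) {
  var firstFew = function(t) {
    return t.split('. ').slice(0, 3).join('. ') + '.';
  };
  var mashed = firstFew(textA) + ' ' + firstFew(textB);
  return mashed.replace(new RegExp(termA + '|' + termB, 'gi'), newTerm);
}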

I wasn't a huge fan of the fact that the original philosophers were kept in the descriptions of the new terms, as it roots them a bit too much in the two original philosophies. So, I wrote a short script in Python to mix up the names and places a bit. For this, I needed a way to identify individuals like Karl Popper in the example above. There is a really nice package built on NLTK that I've worked with before that does just that. This created text like:
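
The name-mixing step can be sketched with NLTK's built-in named-entity chunker, standing in here for the unnamed package (requires the punkt, averaged_perceptron_tagger, maxent_ne_chunker, and words data):

import random
import nltk
 
def find_people(text):
    # walk the named-entity chunk tree, collecting PERSON spans like "Karl Popper"
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    people = []
    for chunk in nltk.ne_chunk(tagged):
        if hasattr(chunk, 'label') and chunk.label() == 'PERSON':
            people.append(' '.join(word for word, tag in chunk))
    return people
 
def shuffle_people(text):
    # naively swap each detected person for another one from the same text
    people = find_people(text)
    shuffled = people[:]
    random.shuffle(shuffled)
    for old, new in zip(people, shuffled):
        text = text.replace(old, new)
    return text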

Finally, I typeset the pages using basil.js in InDesign. I'd never worked with basil.js before and really liked the amount of freedom it affords you.

The resulting text can have some interesting combinations of philosophies. Oftentimes there are clear contradictions between the different elements of the terms, which makes it even better. I do think, however, that the text part is a bit too long, especially when there are 12 pages of it; I probably should have made each summary about 3-4 sentences. There was also more work to do in ensuring the text was well formatted, with appropriate spaces and periods, as these got mangled during some of the parsing. It's readable, but not perfect in this regard. I'm otherwise happy with the result.

A sample chapter can be viewed here.

The full set of PDFs can be downloaded here.

All the code for this project can be found here.

lass-Book

link to zipped file

Discusses ailments found in plants, animals, and computers.

 

For this project, I wanted to create a book that contained made-up diseases. My goal was to combine information about diseases with computer errors to create diseases that a robot might encounter. The actual project ended up straying from this quite a bit.

At first, I was really interested in using recurrent neural networks to generate text. I followed this tutorial for ml5's LSTMGenerator, but I started this way too late and didn't have the time/knowledge to train a model to my liking. This is the state I got to before giving up:

Even though I didn't follow through with this approach, I think that I would like to learn more about LSTMs in the future. I really liked a lot of the examples I saw that used this method!

I reused my training text with RiMarkov to generate my text. The text included these three books and some system error codes. This was actually very entertaining. I spent a good amount of time clicking through and enjoying the sentences it created.
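
The RiMarkov step is only a few lines (a sketch assuming RiTa v1; 'corpus.txt' is a placeholder for the combined books-plus-error-codes text):

var markov;
var corpus;
 
function preload() {
  corpus = loadStrings('corpus.txt'); // placeholder for the combined corpus
}
 
function setup() {
  noCanvas();
  markov = new RiMarkov(3); // n-gram order 3
  markov.loadText(corpus.join(' '));
  console.log(markov.generateSentences(5).join(' '));
}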

For the illustrations, I used MakeHuman to generate some random models, and Blender to mess them up. This was actually my favorite part of the project. Here is my personal favorite illustration:

Overall, I would say I had a lot of fun with the process, but I'm not too sure if I like the final product. I think that there are a couple of pages that are very good, but a lot of it is just confusing!

harsh-Book

Title: Make Limericks Not Hate
Description: A collection of limericks made out of the President's tweets.

link

Narrative

 

Needless to say, this project was super fun to work on (though frustrating at times). Here's a short description of the steps I took to make it:

  1. Collect a JSON of tweets containing certain keywords from http://www.trumptwitterarchive.com/
  2. Parse those tweets with RiTa.js and break the sentences down into 8-ish-syllable strings with the keyword at the end, e.g. "bad" or "fake news" (see the sketch after this list).
  3. Key Words: [fake news, wall, bad, winning, loser, stupid]
  4. Come up with the starting lines for the limericks. Because a limerick has an AABBA structure, I chose to write defaults for the first A, the first B, and the last A, then inject the parsed sentences into the remaining A and B, so there would be some continuity and narrative in the limericks.
  5. Program in RiTa to construct the limericks and export them as a JSON of individual limericks. I essentially mixed and matched my keywords between the A's and B's, so you could find 'news' as A and 'bad' as B and vice versa.
  6. Program a particle system using the Twitter icon in Basil.js to serve as the background for the chapter.
  7. Randomly mix and match limericks and place them into the document in Basil.
  8. Huzzah!
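
The syllable test from step 2 might be sketched like this with RiTa v1, whose getSyllables() separates syllables with '/' and words with spaces (an illustration, not the exact parsing code):

// count syllables in a phrase via RiTa's phonetic transcription
function syllableCount(phrase) {
  return RiTa.getSyllables(phrase).split(/[ /]/).length;
}
 
// keep fragments of roughly eight syllables that end on the keyword
function isLimerickLine(phrase, keyword) {
  var tokens = RiTa.tokenize(phrase);
  return tokens[tokens.length - 1] === keyword &&
         Math.abs(syllableCount(phrase) - 8) <= 2;
}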

Overall, I'm happy with the results I got - in another iteration of this project I'd focus on parsing the tweets in a smarter manner, possibly using sentiment analysis or something else to understand the meaning in the tweet.

Code - RiTa.js

var wall;
var stupid; 
var sad;
var bad;
var news;
var loser;
 
var news_st_end = ['There was once a man who was known to accuse', 'It just seemed like he was confused'];
var news_end = ["Often he'd overuse"];
 
var bad_st_end = ["There was once a man who was very mad","But now we know he was just a fad"];
var bad_end = ["He was often mad"];
 
var wall_st_end = ["There was once a man who would often bawl","But to be fair, he wasn't very tall"];
var wall_end = ["He would often call"];
 
var loser_st_end = ["There was once an old schmoozer","He came to be known as quite the abuser"];
var loser_end = ["He'd often doozer"];
 
var stupid_st_end = ["There was once a man who wasn't exactly lucid","All of his claims were later disputed"];
var stupid_end = ["He concluded"];
 
var winning_st_end = ["There was once a man who'd keep singing","Yeah, he was crazy from the beginning"];
var winning_end = ["He'd be grinning"];
 
var list = ['news', 'bad', 'wall', 'loser', 'stupid', 'winning'];
var stList = [news_st_end, bad_st_end, wall_st_end, loser_st_end, stupid_st_end, winning_st_end];
var endList = [news_end, bad_end, wall_end, loser_end, stupid_end, winning_end];
 
function preload(){
    winning = loadStrings('trump_winning.txt');
    wall = loadStrings('trump_wall.txt');
    stupid = loadStrings('trump_stupid.txt');
    sad = loadStrings('trump_sad.txt');
    bad = loadStrings('trump_bad.txt');
    news = loadStrings('trump_fake_news.txt');
    loser = loadStrings('trump_loser.txt');
}
 
 
 
function setup() {
  var strings = [news, bad, wall, loser, stupid, winning];
  createCanvas(300, 300);
  background(255);
  fill(255);
  var json = {};
  var temp;
 
 
  for(var i=0; i<list.length; i++){
    for(var j=0; j<list.length; j++){
        if(i==j){continue;}
            temp = make_limerick(strings[i], strings[j], stList[i], endList[j], list[i]+'+'+list[j]);
            json[list[i]+'+'+list[j]] = temp;
    }
  }
 
  saveJSON(json, 'master');
}
 
function draw(){
 
}
 
 
function make_limerick(A, B, A_st_end, B_end, name){
    var lower = Math.min(A.length, B.length);
    var result = [];
 
    for(var i=0; i<lower; i++){
        var temp = A_st_end[0] + "," + "\n";
        temp += "He'd say, " + '"' + cap_first(A[i].trim()) + '"' + "," + "\n";
        temp += B_end[0] + "," + "\n";
        temp += '"' + cap_first(B[i].trim()) + '"';
        temp += "," + "\n";
        temp += A_st_end[1] + ".";
        console.log(temp);
        result.push(temp);
    }
    var txt = parse('%s.json', name); // note: txt is unused
    return result;
}
 
function cap_first(string) 
{
    return string.charAt(0).toUpperCase() + string.slice(1);
}
 
function parse(str) {
    var args = [].slice.call(arguments, 1),
        i = 0;
 
    return str.replace(/%s/g, function() {
        return args[i++];
    });
}
 
function clean_up (stuff){
    var master = '';
    var expression1 = /[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)/; 
    var expression2 = /(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9]\.[^\s]{2,})/;
    var expression3 = /(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)/;
    var expression4 = /^[ \t]+|[ \t]+$/; 
    var expression5 = /a@(foo|bar|baz)\b/; 
 
 
    var regex1 = new RegExp(expression1);
    var regex2 = new RegExp(expression2);
    var regex3 = new RegExp(expression3);
    var regex4 = new RegExp(expression4);
    var regex5 = new RegExp(expression5);
 
 
    for(var i=0; i < 100; i++){
        var cur_text = stuff[i].text;
        var rs = new RiString(cur_text);
        rs.replaceAll(regex1, " ");
        rs.replaceAll(regex2, " ");
        rs.replaceAll(regex3, " ");
        rs.replaceAll(regex4, "");
        rs.toLowerCase();
        if(rs._text.search("stupid") != -1 ){
            var tokens = RiTa.tokenize(rs._text);
            var badIndex = tokens.indexOf("stupid");
            if(badIndex > 5){
                tokens = tokens.slice(badIndex-10, badIndex+1);
                var token_string = '';
 
                for(var j=0; j<tokens.length; j++){
                    token_string += tokens[j];
                    token_string += ' ';
                }
 
                master += token_string;
                master += '\n';
            }          
        }      
    }
 
    return master;
}

Code - Basil.js

#include "../../bundle/basil.js";
 
// Version for basil.js v.1.1.0
// Load a data file containing your book's content. This is expected
// to be located in the "data" folder adjacent to your .indd and .jsx. 
// In this example (an alphabet book), our data file looks like:
// [
//    {
//      "title": "A",
//      "image": "a.jpg",
//      "caption": "Ant"
//    }
// ]
var jsonString;
var jsonData;
 
//--------------------------------------------------------
function setup() {
 
  // Load the jsonString. 
  jsonString = b.loadString("master.json");
 
 
 
  // Clear the document at the very start. 
  b.clear (b.doc());
 
 
 
var offset = 0;
 
for(var f=0; f<25; f++){
 
    // Make a title page.
 
    var pageNum = 0;
    b.addPage();
    var birds = generateBirds();
    b.fill(0,0,0);
    b.textSize(30);
    b.textFont("Helvetica","Bold"); 
    b.textAlign(Justification.LEFT_ALIGN); 
    b.text("Build Limericks Not Walls", 100,390,450,36);
 
    var obj = b.JSON.decode(jsonString);
    var list = ['news', 'bad', 'wall', 'loser', 'stupid', 'winning'];
 
    for(var i=0; i<6; i++){
      b.textFont("Helvetica","Regular");
      b.textSize(16);
      var curBroadTopic = list[i]; 
      for(var j=0; j<4; j++){     
        var rand = Math.floor(Math.random() * 6);
        while(rand == i){
          rand = Math.floor(Math.random() * 6);
        }
        var curSpecificTopic = curBroadTopic + "+" + list[rand];
        var curValList = obj[curSpecificTopic];
        var randomVal = Math.floor(Math.random() * curValList.length); // parenthesized: "* length-1" could yield index -1
        curVal = curValList[randomVal];
          if(j%2 == 0){
            b.addPage();
            pageNum += 1;
            moveBirds(birds,pageNum);         
            b.textFont("Helvetica","Bold");
            var curTopicSplit = curSpecificTopic.split('+');
            b.textSize(16);
            b.text(curTopicSplit[0].toUpperCase(),60,60,400,100);
            b.textSize(16);
            b.textFont("Helvetica","Regular");
            while(curVal === undefined){
              var randVal = Math.floor(Math.random() * curValList.length);
              curVal = curValList[randVal];
            }
            b.text(curVal,60,150,500,200);
          }
          else{
            while(curVal === undefined){
              var randVal = Math.floor(Math.random() * curValList.length);
              curVal = curValList[randVal];
            }
            b.text(curVal,60,400,500,200);
          }   
      }   
 
    offset+= 1;
    // b.savePDF(f.toString());
    // for(var i=0; i<13; i++){
    //   b.removePage();
    // }
  }
 
}
}
 
function randomN(seed) {
    var x = Math.sin(seed++) * 10000;
    return x - Math.floor(x);
}
 
 
function generateBirds(){
  var birds = [];
  var posx;
  var posy;
 
  for(var i=0; i<7; i++){
 
      posx = Math.floor(randomN(i) * 436);
      posy = Math.floor(randomN(1000-i) * 300);
 
      birds.push([posx, posy]);
      var angle = Math.atan((648-posy)/(432-posx));
      b.pushMatrix();
      b.noStroke();
      b.rotate(angle);
      var anImage = b.image("twit.png", posx, posy, 15, 15);
      anImage.fit(FitOptions.PROPORTIONALLY);
      b.opacity(anImage, 50);   
      b.popMatrix();  
  }  
  return birds;
}
 
function moveBirds(birds,pageNum){
  for(var i=0; i<birds.length; i++){
    var curBird = birds[i];
    var posx = curBird[0]+(30*pageNum);
    var posy = curBird[1]+(50*pageNum);
    var angle = Math.atan((576-posy)/(400-posx));
    b.pushMatrix();
    b.noStroke();
    b.rotate(angle);
    var anImage = b.image('twit.png', posx, posy, 15, 15);
    anImage.fit(FitOptions.PROPORTIONALLY);
    b.opacity(anImage, 50);
    b.popMatrix();
  }
}
// This makes it all happen:
b.go();

 

sapeck-Book

Antisemitic Absurdities
A list of antisemitic generalizations applied to show absurdity
https://drive.google.com/file/d/1qWCuH2fUSKBwqD2ek9eJw3ZtDVWXcxbP/view

In response to the attack on the Tree of Life Synagogue in Pittsburgh, PA, I created a book to show the absurdity of antisemitic sentiments. When I attended religious school at my synagogue many years ago, the Anti-Defamation League (ADL) would visit and give talks on antisemitism and how to identify it. These never really resonated with me, as I had never experienced any antisemitism beyond bullying at school or playfully-intended stereotyping. This incident was the first time I had experienced someone who really did not like my people.
First, I searched for antisemitic data. This involved an email to the ADL (who have a giant database of antisemitic Tweets), a post on 4chan, and lots of Twitter scraping. I settled on scraping Twitter for tweets with the exact phrase "Jews are." This captures only generalizations about the Jewish people. Tweets consisted of antisemitic remarks and responses to antisemitic remarks. I then filtered out tweets pertaining to Israel or certain people (e.g. Soros); those issues can be polarizing and deviate from my goal of showing that making generalizations about an ethnic group is absurd. There were very few Tweets about Judaism as a religious practice. All of the Tweets pertained to how the Jewish people fit into the world.
Next, I gathered a list of ethnic groups from Wikipedia. I replaced each instance of "Jews are" in the Tweets with a random ethnic group. I showed the modified text on the adjacent page, where the "Jews are" side is black with white text and the opposite side is white with black text. I think that it becomes more absurd and in some cases more relatable.
Lastly, I ordered the tweets by the first word. I start with "Jews are." The first few first words after that are ordered by increasingly narrow generality: "all," "American," "some," "only," "these." Next, I try to create logic with the order: "because" tries to answer a question in "how" and "but" tries to make an exception in "because." I finish with "You" and a colophon.

The code for this project consists of more than a dozen files (full NodeJS project with compilation, Python scraper, BasilJS jsx, etc.), so I have compressed it into a ZIP file:
sapeck-07-book-code.zip

chaine-parrish

I love Allison Parrish's comparison of literature with space exploration: there are places even in literature that remain mostly unexplored because they are "taboo," such as books that only repeat a single word or speak in a made-up, generated language. This particular point stuck with me because it made me wonder which other fields, not just space and literature, offer this exciting opportunity. It makes me imagine what it would be like to explore vastly different fields with automatic systems, programs, or robots, in ways people had never thought of or never thought worthy of much exploration.

 

yuvian-book

click here for an example pdf

click here for zip file of all 25 pdfs

"fortune cookies"

randomly generated advertisements for Chinese restaurants with their own wacky fortune cookies

demonstrated through a p5js sketch:


Initially, I had no idea what I wanted to do for this project. I was really impressed by the examples we were shown in class and knew that I wanted to incorporate both generative text and generative imagery. In particular, I liked Lingdong's approach in his project "Fauna of Sloogia," where he focused more on generative imagery, and the generative-text aspect of his book was simply the names he gave to each generated creature.

With this in mind, I first thought of images I could generate. I needed objects that came in large batches/quantities but were not all exactly alike. Items that fit this category included snowflakes, fruits, etc. but these seemed unimaginative. Finally, I thought of fortune cookies and this idea immediately tied into what I wanted to create through generative text - strange fortunes and Chinese restaurant names - and from there, I developed a comprehensive plan for my generative chapter.


Process

This project was split into four parts: the fortune cookie, the takeout box, the restaurant name, and the fortune.

1. The fortune cookie:

For the fortune cookie, I first sketched a few cookies and determined the vertices they all had in common. I gave each cookie 9 vertices and graphed them on a 500x500 p5js canvas.



From there, I played around with bezier curves and shading to create each fortune cookie. Every cookie's vertices and bezier curves are randomized and thus generated by the computer every time.

2. The takeout box:

For the takeout box, I took a very similar approach as with the cookie; I sketched takeout boxes, determined the common vertices, and randomized the vertices within p5 so that no two takeout boxes would look alike. In addition, I added text onto the box. Two phrases, each either "Thank You" or "Enjoy", appear at random points on the box.


3. The restaurant name:
For the generated Chinese restaurant names, I drew inspiration from a discussion I had a few weeks ago about how Chinese restaurant names are commonly composed of similar words and language.

I put more thought and research into this idea and came up with five categories of words commonly found in Chinese restaurant names: places, adjectives, nouns, foods, and locations. Depending on the length of the name, I randomly choose one word from each of the relevant categories, in a predetermined order, to generate a name.

places: "Beijing", "Peking", "Szechuan", "Shanghai", "Hunan", "Canton", "Hong Kong", "Taipei", "China", "Taiwan", "Formosa"

adjectives: "Lucky", "Golden", "Gourmet", "Imperial", "Oriental", "Grand", "Mandarin", "Supreme", "Royal", "East", "Old", "Happy", "Hot", "Chinese"

nouns: "Cat", "Moon", "Sun", "Dragon", "Star", "Roll", "Panda", "Bamboo", "Chef", "King", "Empire", "Empress", "Emperor", "Phoenix", "Lion", "Tiger", "Jade", "Pearl"

foods: "Seafood", "Noodle", "Dim Sum", "Hot Pot", "Rice", "Ramen", "Hibachi"

locations: "Palace", "Garden", "Cafe", "Bistro", "Kitchen", "Restaurant", "Buffet", "House", "Wok", "Bowl", "Grill", "Cuisine", "Express"

For example, if I wanted to generate a Chinese restaurant name that was four words long and had a word order of adjective-food-place-location, one example would be "Golden Noodle Szechuan Kitchen".

To finalize this, I gave each word length (from two words long to five words long) a chance of 25% (i.e. the chance of the generator returning a three-word-long name and a five-word-long name were both 25%). And within each word length, I thought of word orderings and weighted the chances of each ordering.

To demonstrate the Chinese Restaurant Name Generator in action, click on the following sketch to generate names:


4. The fortune:
To generate the fortune, I first wrote down each fortune with blanks and filled in the blanks with random nouns, adverbs, adjectives, verbs, etc. using RiTa.

code

 
// COOKIE
// points for the cookie
var x1, x2, x3, x4, x5, x6, x7, x8, x9;
var y1, y2, y3, y4, y5, y6, y7, y8, y9;
// bezier control points, assigned in generateCookie()
var b1x, b1y, b2x, b2y, b3x, b3y, b4x, b4y, b5x, b5y, b6x, b6y;
// TAKEOUT BOX
//variables for box points
var tx1,ty1,tx2,ty2,tx3,ty3,tx4,ty4,tx5,ty5,tx6,ty6,tx7,ty7;
//visible flap vertex points
var fx, fy;
//variables for handle points
var hx1, hy1, hx2, hy2, hx3, hy3, hx4, hy4;
//RiTA stuff
var rg;
 
var name, lengthChance, typeChance; // variables for generated restaurant name
var myFont, chineseFont; // custom fonts
var luckyNums = []; // array of lucky numbers
var prediction = ''; // fortune cookie fortune/prediction
var phoneNumber = ''; // phone number
 
function preload() {
    myFont = loadFont('andale-mono.otf');
    chineseFont = loadFont('chinese.ttf');
}
 
function setup() {
  createCanvas(500, 680);
  background(239, 50, 40); // red color
  background(240);
  noLoop();
 
  // assign values to lengthChance and typeChance
  lengthChance = random(0,100);
  typeChance = random(0,100);
 
  // generate restaurant name
  name = generateRestaurantName();
  // display restaurant name text
  displayRestaurantName();
 
  // generate phone number
  generatePhoneNumber();
  drawPhoneNumber();
 
  // draw slip of paper
  drawPaper();
 
  // generate lucky numbers
  generateLuckyNumbers();
  drawLuckyNumbers();
  // generate the fortune, then display it
  prediction = generateFortune();
  drawFortune();
 
  // generate and draw box
  generateBox();
  drawBox();
 
  // generate cookie 
  generateCookie();
  // draw the cookie
  drawCookie();
 
 
  // button to download json file
  createJSONFile();
 
}
 
function mousePressed() { // generate new cookie, restaurant name, and fortune on mouse press
 setup(); 
}
 
function generateRestaurantName() { // returns string of generated Restaurant name
  name = "";
 
  // Places 11
  var places = ["Beijing", "Peking", "Szechuan", "Shanghai", "Hunan", "Canton", "Hong Kong", "Taipei", "China", "Taiwan", "Formosa"]
  // Adjectives 14
  var adj = ["Lucky", "Golden", "Gourmet", "Imperial", "Oriental", "Grand", "Mandarin", "Supreme", "Royal", "East", "Old", "Happy", "Hot", "Chinese"] 
  // Nouns 18
  var noun = ["Cat", "Moon", "Sun", "Dragon", "Star", "Roll", "Panda", "Bamboo", "Chef", "King", "Empire", "Empress", "Emperor", "Phoenix", "Lion", "Tiger", "Jade", "Pearl"]
  // Food 7
  var food = ["Seafood", "Noodle", "Dim Sum", "Hot Pot", "Rice", "Ramen", "Hibachi"]
  // Last words 13
  var last = ["Palace", "Garden", "Cafe", "Bistro", "Kitchen", "Restaurant", "Buffet", "House", "Wok", "Bowl", "Grill", "Cuisine", "Express"];
 
  // Generate some random names
  if (lengthChance >= 0 && lengthChance <= 25) { // two word length
      if (typeChance >= 0 && typeChance <= 17) {
        name += places [floor(random(11))] + " ";
        name += noun [floor(random(18))];
      }
      else if (typeChance > 17 && typeChance <= 34) {
        name += adj [floor(random(14))] + " ";
        name += noun [floor(random(18))];
      }
      else if (typeChance > 34 && typeChance <= 51) {
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 51 && typeChance <= 68) {
        name += adj [floor(random(14))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 68 && typeChance <= 85) {
        name += adj [floor(random(14))] + " ";
        name += food [floor(random(7))];
      }
      else if (typeChance > 85) {
        name += places [floor(random(11))] + " ";
        name += last [floor(random(13))];
      }
 
    } 
    else if (lengthChance > 25 && lengthChance <= 50) { // three word length
      if (typeChance >= 0 && typeChance <= 17) {
        name += places [floor(random(11))] + " ";
        name += noun [floor(random(18))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 17 && typeChance <= 34) {
        name += adj [floor(random(14))] + " ";
        name += noun [floor(random(18))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 34 && typeChance <= 51) {
        name += adj [floor(random(14))] + " ";
        name += places [floor(random(11))] + " ";
        name += noun [floor(random(18))];
      }
      else if (typeChance > 51 && typeChance <= 68) {
        name += adj [floor(random(14))] + " ";
        name += noun [floor(random(18))] + " ";
        name += food [floor(random(7))];
      }
      else if (typeChance > 68 && typeChance <= 85) {
        name += places [floor(random(11))] + " ";
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 85 ) {
        name += adj [floor(random(14))] + " ";
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
    } 
    else if (lengthChance > 50 && lengthChance <= 75 ) { // four word length
      if (typeChance >= 0 && typeChance <= 20) {
        name += places [floor(random(11))] + " ";
        name += adj [floor(random(14))] + " ";
        name += noun [floor(random(18))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 20 && typeChance <= 40) {
        name += places [floor(random(11))] + " ";
        name += adj [floor(random(14))] + " ";
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 40 && typeChance <= 60) {
        name += adj [floor(random(14))] + " ";
        name += noun [floor(random(18))] + " ";
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 60 && typeChance <= 80) {
        name += places [floor(random(11))] + " ";
        name += noun [floor(random(18))] + " ";
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 80) {
        name += adj [floor(random(14))] + " ";
        name += food [floor(random(7))] + " ";
        name += places [floor(random(11))] + " ";
        name += last [floor(random(13))];
      }
    }
    else if (lengthChance > 75) { // five word length
      if (typeChance >= 0 && typeChance <= 40) {
        name += places [floor(random(11))] + " ";
        name += adj [floor(random(14))] + " ";
        name += noun [floor(random(18))] + " ";
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 40 && typeChance <= 60) {
        name += adj [floor(random(14))] + " ";
        name += food [floor(random(7))] + " ";
        name += noun [floor(random(18))] + " ";
        name += places [floor(random(11))] + " ";
        name += last [floor(random(13))];
      }
      else if (typeChance > 60) {
        name += adj [floor(random(14))] + " ";
        name += noun [floor(random(18))] + " ";
        name += places [floor(random(11))] + " ";
        name += food [floor(random(7))] + " ";
        name += last [floor(random(13))];
      }
    }
    return name;
}
 
function displayRestaurantName() {
  textAlign(LEFT);
  textFont(myFont);
  fill(0);
  textSize(12);
	noStroke();
  text(name, 40, 50);
}
 
function generateCookie() { // generate points for the cookie
  // generate random points for fortune cookie
  x1 = width/2 - random(15,25);
  y1 = random(50,80);
  x2 = random(60, 78);
  y2 = random(225, 245);
  x3 = x2 - random(10,15);
  y3 = random(360, 380);
  x4 = 260;
  y4 = random(420,440);
  x5 = 255;
  y5 = random(300,350);
  x6 = 252;
  y6 = y5 - 65;
  x7 = 250;
  y7 = random(140,160);
  x8 = random(380, 390);
  y8 = random(390, 420);
  x9 = x8 + random(20,35);
  y9 = y2 - 20;
 
  // bezier vertices
  b1x = x1 - random(90,130);
  b1y = y1 + random(8,13);
  b2x = x2 + random(40,50);
  b2y = y2 - random(40,60);
  b3x = x5 + random(5, 8);
  b3y = (y5 - y6) * 0.9 + y6
  b4x = x6 - random(25,35);
  b4y = random(155,170);
  b5x = x7 + random(15,20);
  b5y = y7 + random(20,30);
  b6x = x6 + random(5,15);
  b6y = y6 - random(25,40);
}
 
function drawCookie() {
  push();
  translate(80,280);
  scale(0.35);
 
  stroke(0);
  // fill(247, 237, 185); // beige color
  fill(255);
  strokeWeight(4);
 
  // left half of cookie
  beginShape();
  vertex(x1, y1);
  bezierVertex(b1x, b1y, b2x, b2y, x2, y2);
  vertex(x2, y2);
  bezierVertex(x2,y2,x3,y3,x4,y4);
  vertex(x4, y4);
  vertex(x5, y5);
  bezierVertex(x5, y5, b3x, b3y, x6,y6);
  endShape();
 
  // right half of cookie
  beginShape();
  vertex(x5,y5);
  bezierVertex(x5,y5,(x5+x8)/2 - 10,(y5+y8)/2 + 10,x8,y8);
  vertex(x8, y8);
  bezierVertex(x8,y8, (x9-x8)*1.4+x8, (y8 - y9) * 0.9 + y9 ,x9,y9);
  vertex(x9, y9);
  bezierVertex(x9-80, y9 -50, x1 + 90, y1 + 20, x1, y1);
  vertex(x1, y1);
  endShape();
 
  // inner crease
  beginShape();
  fill(0);
  vertex(x6,y6);
  bezierVertex(x6,y6,b4x,b4y,x7,y7);
  vertex(x7,y7);
  bezierVertex(b5x,b5y,b6x,b6y,x6,y6);
  vertex(x6,y6);
  endShape();
 
  // left fold of cookie
  fill(0);
  beginShape();
  vertex(x2, y2);
  bezierVertex(x2,y2,x3,y3,x4,y4);
  vertex(x4, y4);
  bezierVertex(x4, y4, x4-20, y4-35, (x2+x4)/2 - 15, (y2+y4)/2 + 15); 
  bezierVertex((x2+x4)/2 - 15, (y2+y4)/2 + 15, x2 +10, y2 +80, x2, y2);
  vertex(x2,y2);
  endShape();
 
  // line connecting inner black ellipse and left vertex
  strokeWeight(3);
  line(x6,y6,x5,y5);
 
  pop();
}
 
function generatePrediction() {
 
}
 
function generateLuckyNumbers() {
  for (var i = 0 ; i < 6; i++) {
    luckyNums[i] = String(floor(random(0,100)));
  }
  return luckyNums;
}
 
function drawLuckyNumbers() {
  luckynum1 = luckyNums[0];
  luckynum2 = luckyNums[1];
  luckynum3 = luckyNums[2];
  luckynum4 = luckyNums[3];
  luckynum5 = luckyNums[4];
  luckynum6 = luckyNums[5];
 
  fill(0);
  textSize(12);
  textFont(myFont);
	noStroke();
	textAlign(CENTER);
  text("Lucky Numbers: " + luckynum1 + "  " + luckynum2 + "  " + luckynum3 + "  " + luckynum4 + "  " + luckynum5 + "  " + luckynum6, width/2, 545);
}
 
function generatePhoneNumber() {
  phoneNumber = "(" + String(floor(random(1,10))) + String(floor(random(0,10))) + String(floor(random(0,10))) + ")-" + String(floor(random(0,10))) + String(floor(random(0,10))) + String(floor(random(0,10))) + "-" + String(floor(random(0,10))) + String(floor(random(0,10))) + String(floor(random(0,10))) + String(floor(random(0,10)))
}
 
function drawPhoneNumber() {
  textAlign(LEFT);
  textFont(myFont);
  fill(0);
  textSize(12);
	noStroke();
  text("Call " + phoneNumber + " to order!", 40, 80);
}
 
function generateBox() {
  // box points
  tx1 = random(100,140);
  ty1 = random(150,190);
  tx2 = random(140,180);
  ty2 = random(430,500);
  tx3 = random(250,300);
  ty3 = ty2 + random(80,100);
  tx4 = random(480,510);
  ty4 = random(470,500);
  tx5 = random(500,530);
  ty5 = random(140,180);
  tx6 = random(320,360);
  ty6 = random(90,120);
  tx7 = tx3;
  ty7 = ty1 + random(40,60);
 
  //flap points
  fx = random((tx1 + 45)-15, (tx1 + 45) + 15);
  fy = random((ty1 + 120) - 10, (ty1 + 120) + 10);
 
  //handle points
  hx1 = floor((tx1 + tx7) / 2)
  hy1 = floor((ty1 + ty7) / 2) + 40;
  hx2 = hx1 - random(3,7);
  hy2 = hy1 - random(120,160);
  hx4 = floor((tx6 + tx5)/2)
  hy4 = floor((ty6 + ty5)/2)
  hx3 = floor((tx6 + tx5)/2)
  hy3 = hy4 - random(70,100);
}
 
function drawBox() {
  push();
  scale(0.5);
  translate(260,300);
  fill(255);
  strokeWeight(2.6);
  stroke(0);
 
  //left face
  beginShape();
  vertex(tx1, ty1);
  vertex(tx2, ty2);
  vertex(tx3, ty3);
  vertex(tx7, ty7);
  vertex(tx1, ty1);
  endShape();
 
  //right face
  beginShape();
  vertex(tx3,ty3);
  vertex(tx4,ty4);
  vertex(tx5, ty5);
  vertex(tx7, ty7);
  vertex(tx3,ty3);
  endShape();
 
  //left folds
  //back flap
  beginShape();
  vertex(tx1,ty1);
  vertex(235,310);
  vertex(tx2, ty2);
  vertex(tx1, ty1);
  endShape();
  //front flap
  beginShape();
  vertex(tx7,ty7);
  vertex(fx,fy);
  vertex(tx3, ty3);
  vertex(tx7, ty7);
  endShape();
 
  //top face
  beginShape();
  vertex(tx5,ty5);
  vertex(tx6,ty6);
  vertex(tx1,ty1);
  vertex(tx7,ty7);
  vertex(tx5,ty5);
  endShape();
 
  //handle
  //left vertical
  strokeWeight(3);
  // line(hx1,hy1,hx2,hy2);
  beginShape();
  noFill();
  vertex(hx1, hy1);
  vertex(hx2,hy2+20);
  bezierVertex(hx2, hy2+20, hx2, hy2, hx2+20, hy2)
  vertex(hx2+20, hy2)
  vertex(hx3 - 20,hy3)
  bezierVertex(hx3-20, hy3, hx3, hy3, hx3, hy3+20)
  vertex(hx3, hy3+20)
  vertex(hx4, hy4)
  endShape();
  //top horizontal
  // line(hx2,hy2,hx3,hy3);
 
  drawMessages();
 
  pop();
}
 
function drawMessages() {
  var messages = ["ENJOY", "THANK YOU"];
  var i1 = floor(random(2));
  var i2 = floor(random(2));
  var m1 = messages[i1];
  var m2 = messages[i2];
  textFont(chineseFont);
  noStroke();
  fill(0);
  textAlign(CENTER);
  textSize(25);
 
  x1 = floor((tx5-tx7)/2 + random(200,270));
  y1 = floor((ty4-ty3)/2) + random(250,300);
  x2 = x1 + random(60,90);
  x2 = constrain(x2, tx1, tx4);
  y2 = y1 + random(90,140);
  y2 = constrain(y2, ty7, ty4);
  r1 = random(HALF_PI/10,HALF_PI/6)
  r2 = random(-HALF_PI/10, HALF_PI/10)
 
  push();
    rotate(r1);
    text(m1, x1, y1);
  pop();
 
  push();
    rotate(r2);
    text(m2, x2, y2);
  pop();
}
 
function drawPaper() {
  rectMode(CENTER)
  stroke(0);
  fill(210);
  rect(width/2, 525, 480, 90);
}
 
function generateFortune() {
  rg = new RiGrammar();
 
  //baseline for fortune cookie fortunes
  rg.addRule('<start>', 'Whoever <V-Singular-Present> a <N-Singular> will never be <V-Past> \nby a <N-Singular>.', 1);
  rg.addRule('<start>', 'Life is too short to <V-Plural-Present> <N-Plural>.', 1);
  rg.addRule('<start>', 'Your greatest strength is that you are <Adjective>.', 1);
  rg.addRule('<start>', 'Your future seems <Adverb> <Adjective>.', 1);
  rg.addRule('<start>', 'Alas, life is but a <Adjective> <N-Singular>.', 1);
  rg.addRule('<start>', 'Your <N-Singular> shines on another.', 1);
  rg.addRule('<start>', 'You will overcome <Adjective> <N-Plural>.', 1);
  rg.addRule('<start>', 'It is not necessary to <V-Plural-Present> others your <N-Singular>; \nit will be obvious.', 1);
  rg.addRule('<start>', 'Sometimes you just need to <V-Plural-Present> the <N-Singular>.', 1);
  rg.addRule('<start>', 'See if you can <V-Plural-Present> anything from the <N-Plural>.', 1);
  rg.addRule('<start>', 'Make the <N-Singular> <V-Plural-Present> for you, not the other way around.', 1);
  rg.addRule('<start>', 'In the eyes of <N-Plural>, everything is <Adjective>.', 1);
  rg.addRule('<start>', 'People in your surroundings will be more <Adjective> than usual.', 1);
  rg.addRule('<start>', 'You will be successful at <V-ing> <N-Plural>.', 1);
  rg.addRule('<start>', 'Whenever possible, keep it <Adjective>.', 1);
  // rg.addRule('<start>', '', 1);
 
  var args1 = {
    tense: RiTa.PRESENT_TENSE,
    number: RiTa.SINGULAR,
    person: RiTa.THIRD_PERSON
  };
  var args2 = {
    tense: RiTa.PRESENT_TENSE,
    number: RiTa.PLURAL,
    person: RiTa.THIRD_PERSON
  };
  var args3 = {
    tense: RiTa.PAST_TENSE,
    number: RiTa.SINGULAR,
    person: RiTa.THIRD_PERSON
  };
 
  //nouns
  rg.addRule('<N-Singular>', RiTa.randomWord("nn"));
  rg.addRule('<N-Plural>', RiTa.randomWord('nns'))
 
  //verbs
  var v = RiTa.randomWord('vb');
  rg.addRule('<V-Singular-Present>', RiTa.conjugate(v, args1));
  rg.addRule('<V-Plural-Present>', RiTa.conjugate(v, args2));
  rg.addRule('<V-Past>', RiTa.conjugate(v, args3));
  rg.addRule('<V-ing>', RiTa.randomWord('vbg'));
 
  //adjective
  rg.addRule('<Adjective>', RiTa.randomWord('jj'));
 
  //adverb
  rg.addRule('<Adverb>', RiTa.randomWord('rb'));
 
  //preposition
  // rg.addRule('<Preposition>', RiTa.randomWord('in'));
 
  result = rg.expand();
  return result;
}
 
function drawFortune() {
  fill(0);
  textSize(12);
  textFont(myFont);
	noStroke();
	textAlign(CENTER);
  text(prediction, width/2, 515);
}
 
function createJSONFile() {
  // Create a JSON Object, fill it with the restaurants.
  var myJsonObject = {};
  myJsonObject.restaurantName = name;
  myJsonObject.phoneNumber = phoneNumber;
  myJsonObject.prediction = prediction;
  myJsonObject.luckyNumbers = luckyNums;
 
  // Make a button. When you press it, it will save the JSON file
  createButton('save')
    .position(width/2-20, height-50)
    .mousePressed(function() {
      saveJSON(myJsonObject, 'data.json');
    });
}

yalbert-book

Gender Bended Classics: Your favorite classics populated by a cast of familiar yet drastically different characters.

All files: yalbert-pdfs

Read a chapter here.

Overview: For this project, I wanted to explore gender expectations. In particular, I wanted to highlight how strongly held they can be without us even realizing it. In order to do this, I took classical texts, such as The Great Gatsby, Pride and Prejudice, and Mary Poppins, and switched the genders of all of the characters.

How I did it: If I wanted to create an ideal gender-switching program, I would probably want to use some form of machine learning. However, a relatively simple find-and-replace algorithm, which I used for this project, works pretty well with minimal code. There are two main parts to my gender-bending program: pronouns and names. The pronoun aspect is pretty straightforward. I simply compiled a dictionary of common pronouns (he : she, him : her, etc.) and used it to switch out a word with its opposite-gender equivalent whenever I came across it in the text. There are still a few sticking points, but overall this works really well. The second element, name flipping, was significantly more difficult. Initially, I simply referenced a corpus of names to find the opposite-gendered equivalent purely based on the Levenshtein distance between the original name and its opposite-gendered candidate. However, this led to a lot of obscure names being used.

The effect it had: Even with an extremely imperfect gender-switching algorithm, I'm really happy with the result of the project. You don't realize how little you associated men with nannies until you've read an excerpt from Marcus Poppins, or what the American dream means through the eyes of a woman until you've heard the tale of Jayla Gatsby. The wonderful thing about books is that the switch isn't immediately apparent. You'll skim a more or less normal novel until you realize that something is off. Once you discover why the book seemed strange, you're left to wonder why you thought it was odd for a man to be a nanny or a woman to accrue wealth in order to win back a lost love.

Next steps: The algorithm, particularly the name replacement, still needs a lot of work. I just found the US census results of the most popular names, arranged in ascending order, for every year since 1880. I'm trying to use this to generate more period-relevant names that are selected based on both Levenshtein distance and popularity, instead of purely the former. I'd like to keep improving this algorithm until I can generate texts that are more convincing than what I currently have.

Text processing python script:

import random
from random import shuffle
from os import listdir
from os.path import isfile, join
punctuations = ["","'s", ".", ":", ",", "!", "?", ";"]
quotations = ["'", '"', "(", ")", '“','”']
splitters = {"\n", "-"}
 
 
def makeNameSets(year = 2017):
    path = "names/"
    files = [f for f in listdir(path) if isfile(join(path, f))]
    nameFile = getRightFile(year)
    contents = open("names/" + nameFile, "r")
    namesArr = contents.read().split("\n")
    femaleNames = dict()
    maleNames = dict()
    for namePkg in namesArr:
        sliced = namePkg.split(",")
        if(len(sliced) == 3):
            name, gender, pop = sliced
            if(gender == "F"):
                if(name[0] in femaleNames):
                    letterDict = femaleNames[name[0]]
                else:
                    letterDict = dict()
                    femaleNames[name[0]] = letterDict
            else:
                if(name[0] in maleNames):
                    letterDict = maleNames[name[0]]
                else:
                    letterDict = dict()
                    maleNames[name[0]] = letterDict
            letterDict[name] = int(pop)
    return(femaleNames, maleNames)  
 
def getRightFile(year):
    path = "/Users/Maayan/Google Drive/year 4.0 : senior fall/golan intermediate studio/07-book/gender flipper/names/"
    files = [f for f in listdir(path) if isfile(join(path, f))]
    files = sorted(files)
    return files[binSort(year, files, 1, len(files))]
 
def binSort(year, files, lowerInd, upperInd):
    midInd = (upperInd - lowerInd)//2 + lowerInd
    mid = int(files[midInd][3:7])
 
    if(mid == year):
        return midInd
    elif(mid < year):
        if(midInd == len(files)-1):
            return midInd
        else:
            return binSort(year, files, midInd, upperInd)
    else:
        if(midInd == 1):
            return midInd
        else:
            return binSort(year, files, lowerInd, midInd)
 
def getNamesInNovel(contents, femaleNames, maleNames):
    names = dict()
    for word in contents:
        wordIters = [word]
        nameFound = None
        gender = None
        for i in range(1, 4):
            if(len(word) > i):
                wordIters.append(word[:i*-1])
                wordIters.append(word[i:len(word)])
        for wordIter in wordIters:
            if(len(wordIter) != 0):
                firstLetter = wordIter[0]
            else:
                continue
            if firstLetter in femaleNames and wordIter in femaleNames[firstLetter].keys():
                curDict = femaleNames[firstLetter]
                if(curDict[wordIter] > 50):
                    nameFound = wordIter
                    gender = "f"
                    break
            if firstLetter in maleNames and wordIter in maleNames[firstLetter].keys():
                curDict = maleNames[firstLetter]
                if(curDict[wordIter] > 50):
                    nameFound = wordIter
                    gender = "m"
                    break
        if(nameFound != None):
            names[nameFound] = gender
 
    return names
 
def nameDictGenerator(contents, year):
    femaleNames, maleNames = makeNameSets(year)
    namesInNovel = getNamesInNovel(contents, femaleNames, maleNames)
    nameDict = dict()
    for name in namesInNovel.keys():
        if(namesInNovel[name] == "f"):
            nameSet = maleNames[name[0]]
            sameNameSet = femaleNames[name[0]]
        else:
            nameSet = femaleNames[name[0]]
            sameNameSet = maleNames[name[0]]
        closestName = findClosestName(name, nameSet, sameNameSet)
        if(name != "" and closestName != ""):
            addToDict(name, closestName, nameDict, True)
    return nameDict
 
def findClosestName(name, nameSet, sameNameSet):
    leastDist = None
    closestName = None
    closestNames = []
    maxDist = 3
 
    for otherName in nameSet:
        if(otherName in sameNameSet):
            continue
        if(len(name) > 0 and len(otherName) > 0 and name[0] != otherName[0]):
            continue
        dist = iterative_levenshtein(name, otherName)
        if(dist <= 3 and otherName):
            closestNames.append(otherName)
        elif(leastDist == None or leastDist > dist):
            leastDist = dist
            closestName = otherName
 
    if(len(closestNames) == 0):
        return closestName
    else:
        return findMostPopularName(closestNames, nameSet)
 
def findMostPopularName(closestNames, nameSet):
    mostPopName = None
    mostPopValue = None
    for name in closestNames:
        popValue = nameSet[name]
        if(mostPopValue == None or popValue > mostPopValue):
            mostPopValue = popValue
            mostPopName = name
    return mostPopName
 
def iterative_levenshtein(s, t, costs=(1, 1, 1)):
    """ 
        iterative_levenshtein(s, t) -> ldist
        ldist is the Levenshtein distance between the strings 
        s and t.
        For all i and j, dist[i,j] will contain the Levenshtein 
        distance between the first i characters of s and the 
        first j characters of t
 
        costs: a tuple or a list with three integers (d, i, s)
               where d defines the costs for a deletion
                     i defines the costs for an insertion and
                     s defines the costs for a substitution
    """
    rows = len(s)+1
    cols = len(t)+1
    deletes, inserts, substitutes = costs
 
    dist = [[0 for x in range(cols)] for x in range(rows)]
    # source prefixes can be transformed into empty strings 
    # by deletions:
    for row in range(1, rows):
        dist[row][0] = row * deletes
    # target prefixes can be created from an empty source string
    # by inserting the characters
    for col in range(1, cols):
        dist[0][col] = col * inserts
 
    for col in range(1, cols):
        for row in range(1, rows):
            if s[row-1] == t[col-1]:
                cost = 0
            else:
                cost = substitutes
            dist[row][col] = min(dist[row-1][col] + deletes,
                                 dist[row][col-1] + inserts,
                                 dist[row-1][col-1] + cost) # substitution
 
    return dist[rows-1][cols-1]
 
femaleNames, maleNames = makeNameSets()
 
def flipWholeText(textName):
    origText = open("texts/" + textName + ".txt","r")
    rawContents = origText.read()
 
    flippedContents = flip(rawContents)
 
    flippedText= open("flipped_texts/" + textName + "_flipped.txt","w+")
    flippedText.write(flippedContents)
    flippedText.close()
 
def flipExcerpt(textName, title, author, newName, year = 2018):
    origText = open("texts/" + textName + ".txt","r")
    rawContents = origText.read()
    excerptLen = 3000
    start = random.randint(0, len(rawContents) - excerptLen)
    end = start + excerptLen
 
    rawContents = title + "\nBy " + author + "\n" + rawContents[start:end]
 
    flippedContents = flip(rawContents)
 
 
    flippedText= open("../data/" + newName + ".txt","w+")
    flippedText.write(flippedContents)
    flippedText.close()
 
def customSplit(fullWord):
    minLen = None
    maxLen = None
    wordArr = [""]
    for char in fullWord:
        if(char in splitters):
            wordArr.append(char)
            wordArr.append("")
        else:
            curSubstring = wordArr[-1]
            curSubstring = curSubstring + char
            wordArr[-1] = curSubstring            
 
    return wordArr
 
 
def customCombine(wordArr):
    word = ""
    for substring in wordArr:
        word = word + substring
    return word
 
def flip(rawContents, year = 2018): 
 
    contents = rawContents.split(" ")
 
    genDict = makeGeneralDict()
    nameDict = nameDictGenerator(contents, year)
    print(nameDict)
 
    # replace any words
    for i in range(len(contents)):
        word = contents[i]
        wordArr = customSplit(word)
        for j in range(len(wordArr)):
            if(wordArr[j] != "" and wordArr[j] in genDict):
                wordArr[j] = genDict[wordArr[j]]
            if(wordArr[j] != "" and wordArr[j] in nameDict):
                wordArr[j] = nameDict[wordArr[j]]
        word = wordArr[0]
        word = customCombine(wordArr)
 
        contents[i] = word
 
 
    output = " ".join(contents)    
    return output
 
 
def dictInsert(word1, word2, d):
    words = []
 
    # add singular
    words.append(word1)
    d[word1] = word2
 
    # add plural
    words.append(word1 + "s")
    d[word1 + "s"] = word2 + "s"
 
    # add capitals of those two
    for i in range(0, 2):
        word = words[i]
        word1 = word
        word2 = d[word1]
 
        words.append(word1.capitalize())
        d[word1.capitalize()] = word2.capitalize()
 
    # add punctuation
    for word in words:
        for punctuation in punctuations:
            word1 = word + punctuation
            word2 = d[word] + punctuation
 
            d[word1] = word2
 
            for quotation in quotations:
                if(quotation == '“'):
                    d[word1 + '”'] = word2 + '”'
                    d[quotation + word1] = quotation + word2
                    d[quotation + word1 + '”'] = quotation + word2 + '”'
                else:
                    d[word1 + quotation] = word2 + quotation
                    d[quotation + word1] = quotation + word2
                    d[quotation + word1 + quotation] = quotation + word2 + quotation
 
 
 
def addToDict(word1, word2, d, oneWay = False):
    dictInsert(word1, word2, d)
    if(oneWay == False):
        dictInsert(word2, word1, d)             
 
def makeGeneralDict():
    d = dict()
 
    addToDict("he", "she", d)
    addToDict("him", "her", d)
    addToDict("his", "hers", d)
    addToDict("his", "her's", d)
    addToDict("madam", "mister", d)
    addToDict("mr", "mrs", d)
    addToDict("mr", "ms", d)
    addToDict("brother", "sister", d)
    addToDict("aunt", "uncle", d)
    addToDict("mother", "father", d)
    addToDict("mom", "dad", d)
    addToDict("ma", "pa", d)
    addToDict("husband", "wife", d)
    addToDict("king", "queen", d)
    addToDict("gentleman", "lady", d)
    addToDict("gentlemen", "ladies", d)
    addToDict("prince", "pricess", d)
    addToDict("lord", "lady", d, True)
    addToDict("baron", "baroness", d)
    addToDict("miss", "mister", d)
    addToDict("daughter", "son", d)
    addToDict("man", "woman", d)
    addToDict("men", "women", d)
    addToDict("boy", "girl", d)
    addToDict("grandmother", "grandfather", d)
    addToDict("sir", "dame", d)
    addToDict("stepmother", "stepfather", d)
    addToDict("godmother", "godfather", d)
    addToDict("himself", "herself", d)
    addToDict("mss", "mister", d, True)
    addToDict("horseman", "horsewoman", d)
    addToDict("horsemen", "horsewomen", d)
    addToDict("wizard", "witch", d)
    addToDict("warlock", "witch", d, True)
    addToDict("businessman", "businesswoman", d)
    addToDict("businessmen", "businesswomen", d)
    # addToDict("warlock", "witch", d, True)
 
 
    return d
 
books = [("harry_potter", "Harry Potter", "J. K. Rowling"),
        ("alice_in_wonderland", "Alice's Adventures in Wonderland", "Lewis Carrol"),
        ("great_expectations", "Great Expectations", "Charles Dickens"),
        ("huckleberry_finn", "Adventures of Huckleberry Finn", "Mark Twain"),
        ("jane_eyre", "Jane Eyre", "Charlotte Bronte"),
        ("jekyll_hyde", "The Strange Case of Dr. Jekyll and Mr. Hyde", "Robert Louis Stevenson"),
        ("mary_poppins", "Mary Poppins", "P. L. Travers"),
        ("oliver_twist", "Oliver Twist", "Charles Dickens"),
        ("frankenstein", "Frankenstein", "Mary Shelley"),
        ("peter_pan", "Peter Pan", "J. M. Barrie"),
        ("pride_and_prejudice", "Pride and Prejudice", "Jane Austen"),
        ("sherlock_holmes", "The Adventures of Sherlock Holmes", "Sir Arthur Conan Doyle"),
        ("the_great_gatsby", "The Great Gatsby", "F. Scott Fitzgerald"),
        ("anna_karenina", "Anna Karenina", "Leo Tolstoy")]
 
def generateExcerpts(books):
    shuffle(books)
    for i in range(14):
        corpus, title, author = books[i]
        flipExcerpt(corpus, title, author, str(i))
 
generateExcerpts(books)

Basiljs layout script:

#includepath "~/Documents/;%USERPROFILE%Documents";
#include "basiljs/bundle/basil.js";
 
function draw() {
    margin = 70
    width = 432
    height = width*3/2
    files = ["0.txt", "1.txt",
                "2.txt",
                "3.txt",
                "4.txt",
                "5.txt",
                "6.txt",
                "7.txt",                                
                "8.txt",];
 
    b.doc();
    b.clear(b.doc())
    b.textFont("Baskerville", "Regular");
    b.page(1)
 
    b.textSize(36)
    b.textFont("Baskerville", "Bold");
    b.text("Gender Bended Classics", margin, margin*1.5, width-margin*2, 100);
 
    b.textSize(12)
    b.textFont("Baskerville", "Regular");
    b.text("Generated by Maayan Albert", margin, margin*3, width-margin*2, 100); 
 
 
    b.page(2)
 
    for(i = 0; i < files.length; i++){
        file = files[i]
        content = b.loadString(file);
        headers = b.loadStrings(file);
        title = headers[0]
        author = headers[1]
        start = title.length + author.length + 1 + 1
        end = 1000
        firstPage = content.slice(start, end)
 
        b.textAlign(Justification.LEFT_ALIGN)
        b.textSize(12)
        b.textFont("Baskerville", "Regular");
        b.text("Excerpt from:", margin, margin, width-margin*2, 100);
 
        b.textSize(24)
        b.textFont("Baskerville", "Bold");
        b.text(title, margin, margin*1.5, width-margin*2, 100);
 
        b.textSize(12)
        b.textFont("Baskerville", "Regular");
        if(title.length > 24){
            b.text(author, margin, margin*2.4, width-margin*2, 100);           
        }
        else{
            b.text(author, margin, margin*2, width-margin*2, 100);
        }
 
        b.textSize(12)
        b.textFont("Baskerville", "Regular");
        b.text(firstPage, margin, margin*3.5, 
                width- margin*2, height-margin*4.5);
 
        secondPage = content.slice(end)
 
        b.page(b.pageNumber()+1)
        b.text(secondPage, margin, margin, width-margin*2, height-margin*2-margin*.5);
 
 
        b.textSize(24)
        b.textFont("Baskerville", "Regular");
        b.textAlign(Justification.CENTER_ALIGN)
        b.text(". . .", margin, height-margin*1.35, width-margin*2, height-margin*.5);
        b.page(b.pageNumber()+1)
 
    }
 
}
 
b.go();

harsh-parrish

As a connoisseur of space exploration, robotics, and linguistics, I found this video hit right home. There's one thing in particular I'd like to mention here, however, and that is Allison's beautiful use of metaphor. The image of 'bots' we're sending out on a journey into linguistic space, and who are sending us signals back from their exploration, is an incredibly powerful one in my mind, and it really helped me shape my project. It allowed me to think of my work as all of these little creatures I was sending out into the void and getting answers back from - quite a novel approach to creative thought.

shuann-parrish

Above all, the way she defines her text robots as explorers that explore "whatever parts of language that people usually find inhospitable" is very inspiring to me. Personally, I have never thought of generative texts in this way. I agree that we often want to create robots that closely resemble humans, no matter whether it's a bot that plays chess or reconstructs languages. Thus, embracing the rawness and awkwardness of content created by a machine, and more importantly finding meaning within it, can in some sense open up possibilities for new ways of understanding languages, and more specifically the holes that we had naturally avoided.

Another point that stuck with me is the phrase "gaps could have been there for a reason," as a gap might indicate violence or other harmful things. I think this is an important point to make. When we make automated machines and let them out into the world, we often consider what they create to be out of our control, just what it is (e.g. TayTweets). However, I totally agree with the speaker that we, as the creators of the bots, need to take on the responsibility of actively taking precautions against those undesirable outputs.

chromsan-parrish

The part of the lecture that stuck with me the most was the discussion on the mapping of the WordNet hierarchy to the cortex of the brain. The fact that the brain has this topographic representation of words and concepts is quite incredible. There are other mappings in the brain too; visual and auditory information is represented in a sort of hierarchy throughout the cortex. The fact that this also extends to language is even more impressive given that language is evolutionarily newer, and therefore less ingrained in the structure of the brain. These findings suggest a sort of innate mapping of the abstract meanings of words that affects how they are perceived and processed. A very interesting bit of research presented there.