How I Use Natural Language Processing for Meme Research

Natural Language Processing, aka NLP, is a branch of artificial intelligence (AI) that enables computers to understand and interpret human language. NLP is often used to gain insights from a large body of unstructured text data that is too voluminous for human brains to comprehend.

In my PhD project, Recipe for a Good Meme, I asked participants to write down their comprehension of 300 internet memes. This resulted in more than 8,000 observations that would take months to read and interpret manually. So, I'm using NLP to quantify my participants' comprehension of each meme and to examine how their text data relates to the other data I collected from them.

Finding the Right Natural Language Processing Model

The first step was finding the right natural language processing model. My preferred coding language for machine learning is Python, and my data was stored in Google Drive. So, I limited the NLP models to ones that can be run in Google Colab Notebooks since it’s easy to access data from my drive and run Python code quickly.

At first, I wanted to use OpenAI's GPT-3 language models. Since ChatGPT is taking over the world with its impressive dialogue capabilities, I figured other products from OpenAI could suit my research needs. Unfortunately, I found the GPT-3 models to be almost useless in the free tier. Without payment, OpenAI imposes a very low rate limit that blocked my access to the API whenever I requested to process more than one full sentence at a time. I couldn't process the written comprehension for a single internet meme, let alone 300.

Luckily, other APIs and models can do much more without a subscription or payment. So, I turned to the HuggingFace machine learning hub to find the right Python-based model for me. Browsing through the models uploaded by the HuggingFace community, I found the Sentence Transformers library to be the most compatible with my natural language data.

So, I downloaded this library to my Colab notebook.

!pip install -U sentence-transformers

Loading the Data

Then, I mounted my Google Drive to import my data. I also imported the Pandas and NumPy libraries, which are essential for data analysis.

In this example, I am working with a subsample of 20 internet memes from the 300 memes in my dataset.

The written comprehension of participants is stored in the variable “written_comprehension”, while the meaning of each meme is stored in the variable “meme_meanings”.

from google.colab import drive
drive.mount('/content/drive')

import pandas as pd
import numpy as np

written_comprehension = pd.read_excel('/content/drive/MyDrive/RGM/RGM/Text Content Analysis /Computer_Written_Sample20.xlsx')

meme_meanings = pd.read_excel('/content/drive/MyDrive/RGM/RGM/Text Content Analysis /20MemesSourceMeanings.xlsx')
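
Before going further, it's worth a quick sanity check that the data loaded as expected. Here's a minimal sketch; the layout assumed here (one column per meme, one row per participant) matches how I use the data below:

# Quick sanity check on the loaded data. Assumed layout: one column per
# meme in written_comprehension, one row per participant.
print(written_comprehension.shape)
written_comprehension.head()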

Computing Sentence Similarity

To check whether participants understood the memes I presented, I calculated sentence similarity scores between their written comprehension and the meme meanings.

To do this, I used the all-MiniLM-L6-v2 model to extract sentence embeddings. Sentence embeddings are mathematical representations of sentences in the form of a vector: each vector captures the meaning of a sentence as an array of numbers instead of a string of letters.
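
To make this concrete, here's a minimal sketch of what these embeddings look like when the model runs locally with the library installed above (the example sentences are made up):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')

# two made-up example sentences
sentences = ["If you don't cut the cake, it still counts as one piece.",
             "An uncut cake is technically a single slice."]

# encode() maps each sentence to a 384-dimensional vector
embeddings = model.encode(sentences)
print(embeddings.shape)          # (2, 384)

# cosine similarity between the two sentence vectors
print(util.cos_sim(embeddings[0], embeddings[1]))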

Using the Sentence Transformers inference API URL and my HuggingFace API key, I defined a function called "query" that takes a source sentence and a list of comparison sentences as input and outputs their similarity scores.

import json
import requests

API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/all-MiniLM-L6-v2"

headers = {"Authorization": f"Bearer {'APIKey'}"}

def query(payload):
    # POST the payload to the hosted model and return the parsed JSON
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()
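
As a toy example of how this function works (the sentences and output values here are made up for illustration):

# A toy call to query() with made-up sentences; the API returns one
# similarity score per comparison sentence.
scores = query({
    "inputs": {
        "source_sentence": "An uncut cake is technically one piece.",
        "sentences": [
            "If you don't cut the cake, it still counts as one piece.",
            "This meme is from 2009."
        ]
    }
})
print(scores)   # e.g. [0.78, 0.12] (illustrative values)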

Then, I called this function for my own data.

I used a for loop to compute the similarity scores between my participants' comprehension and the meme meanings for the 20 internet memes in my dataset.

meme_similarity_scores = []
for meme in written_comprehension:
  
  #select the meme from columns in written_comprehension
  meme_column = written_comprehension[meme]

  #select only rows without NA values
  meme_column_no_nans = meme_column[~meme_column.isnull()].values

  #query the model: compare the meme meaning (source sentence)
  #against every participant's written comprehension
  similarity = query(
    {
        "inputs": {
            "source_sentence": str(meme_meanings.loc[meme][0]),
            "sentences": list(meme_column_no_nans)
        }
    })

  
  #save into list 
  meme_similarity_scores.append(similarity)
    
meme_similarity_scores

Once I had each participant's similarity score to the meme meanings, I began to interpret the results.
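
One convenient way to work with the nested API output is to flatten it into a long table. Here's a sketch of that step, assuming each element of meme_similarity_scores is a plain list of scores (one per participant):

# Sketch: flatten the nested API output into one long table of
# (meme, similarity) pairs. Assumes each element of
# meme_similarity_scores is a plain list of floats.
rows = []
for meme, scores in zip(written_comprehension.columns, meme_similarity_scores):
    for score in scores:
        rows.append({"meme": meme, "similarity": score})

similarity_df = pd.DataFrame(rows)
similarity_df.head()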

Natural Language Processing Example for Lisa Simpson Meme

[Image: Example of Lisa Simpson meme]

For this Lisa Simpson meme, the meme meaning I inputted was:

This is a Lisa Simpson meme where she gives a presentation, usually about a surprising but good cause. In this case, she is saying that if you don't cut the cake, the whole cake can be considered a piece of it. It makes you feel better about having too much cake. 

When I calculated the sentence similarity scores for this meme, I found that participants varied in how close they were to this exact sentence.

The closest participant with a similarity score of 0.75 said:

Lisa is showing the truth on the screen - that if you don't cut a cake out at all, it still counts as one piece technically


The next closest participant with a similarity score of 0.71 said:

If you don't cut the cake, it's only one piece, so you're only eating one piece. Kind of a cliche.

Meanwhile, the participant with the lowest similarity score of 0.06 said:

shit is from 2009

Correlating Sentence Similarity with Liking and Understanding

Then, I correlated the similarity scores with how much each participant reported liking and understanding the meme. Liking was measured using a 5-star scale, and participants reported their level of understanding from 1 to 10.
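
Here's a minimal sketch of that correlation step with SciPy; the DataFrame below is a toy stand-in for my actual ratings data:

from scipy import stats

# Toy stand-in data: in the real analysis, each row is one participant's
# similarity score plus their liking (1-5 stars) and understanding (1-10).
ratings = pd.DataFrame({
    "similarity":    [0.75, 0.71, 0.40, 0.06],
    "liking":        [5, 4, 3, 2],
    "understanding": [10, 10, 7, 3],
})

r_liking, _ = stats.pearsonr(ratings["similarity"], ratings["liking"])
r_underst, _ = stats.pearsonr(ratings["similarity"], ratings["understanding"])
print(f"liking: r = {r_liking:.2f}, understanding: r = {r_underst:.2f}")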

[Scatter plot: liking ratings vs. sentence similarity, r = 0.28]

[Scatter plot: reported understanding vs. sentence similarity, r = 0.16]

Looking at the graph of aesthetic liking and sentence similarity, we find a wide range of liking ratings for this Lisa Simpson meme, along with a weak but positive correlation (r = 0.28) between the liking ratings and sentence similarity.

On the other hand, the understanding graph shows that many participants reported understanding this meme at a level 10, producing a ceiling effect in my data. As a result, there is almost no correlation (r = 0.16) between sentence similarity and reported understanding.

I calculated the same correlations between sentence similarity and both liking and understanding for the rest of the 20 internet memes in our sample, and found a wide range of correlations across memes:

           Meme   Liking Correlation  Understanding Correlation
0         Lisa4             0.280615                   0.160755
1   JackieChan3             0.032046                  -0.322861
2    Cardboard2            -0.051668                   0.064450
3      Awkward4            -0.052445                  -0.004027
4      Highway3            -0.143227                  -0.035147
5        Text43             0.222007                   0.281037
6         Text5             0.391238                   0.451323
7         Text3             0.072324                   0.106843
8        Text37             0.290415                   0.601990
9        Text30             0.339584                  -0.055030
10     Visual18             0.205619                   0.102117
11      Visual9             0.060527                   0.201191
12     Visual12             0.248009                   0.198133
13     Visual48             0.154011                   0.581456
14     Visual50             0.434082                   0.326877
15     Orphan51             0.071600                   0.170962
16     Orphan30             0.288417                   0.187825
17     Orphan62             0.047850                   0.235466
18     Orphan60             0.287968                   0.561831
19     Orphan75            -0.023420                   0.247262
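
For the curious, a table like this can be assembled by repeating the correlation step once per meme. Here's a sketch; per_meme_data is a hypothetical dictionary standing in for my actual per-meme data:

# Sketch of how a per-meme correlation table like the one above could be
# built. per_meme_data is a hypothetical dict mapping each meme name to
# aligned lists of similarity, liking, and understanding values.
results = []
for meme, d in per_meme_data.items():
    r_lik, _ = stats.pearsonr(d["similarity"], d["liking"])
    r_und, _ = stats.pearsonr(d["similarity"], d["understanding"])
    results.append({"Meme": meme,
                    "Liking Correlation": r_lik,
                    "Understanding Correlation": r_und})

pd.DataFrame(results)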

Limitations with Sentence Similarity Approach

While sentence similarity is a simple NLP approach to text data, it suffers from a few limitations that affect the final results. The biggest is that the results depend on the quality of the source sentence that similarity is measured against.

I, the researcher, was the one who provided the meanings of the memes as the source sentence. Therefore, if I did not describe the meme well, the similarity scores would be affected by my poor description.

In addition, participants may describe their understanding in different ways. Yet, a sentence similarity approach cannot distinguish if a participant misunderstood the meme or is describing their knowledge in a valid but different way.

But for our large body of text data, sentence similarity is an excellent first step toward understanding the contents of our survey. Although we asked participants to describe their understanding of each meme, some instead remarked on the meme's quality or ignored our prompt entirely.

By calculating sentence similarity and correlating it with other data like reported liking and understanding, we can quickly gain insights that would take months to decipher manually.

Future Natural Language Processing Analyses

Sentence similarity is just one possible NLP analysis for meme research. I can also implement a sentiment analysis model to determine whether participants were positive, negative, or neutral in their descriptions of each meme.
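
As a sketch of what that could look like, here's the Hugging Face transformers pipeline with its default sentiment model, applied to one of the real responses from above:

from transformers import pipeline

# The default sentiment model only distinguishes positive from negative;
# a three-class model would be needed to also detect neutral responses.
sentiment = pipeline("sentiment-analysis")

print(sentiment("shit is from 2009"))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}] (illustrative output)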

As with sentence similarity, we can correlate these sentiment scores with participants' liking, understanding, and other emotional categories like amusement, boredom, and confusion.

Additionally, I can extract the word embeddings and use them as features in a machine-learning model that predicts how much participants liked each meme.
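
Here's a hedged sketch of that idea with scikit-learn, using ridge regression as a placeholder model; texts and liking_scores are hypothetical stand-ins for the real data:

from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# texts and liking_scores are hypothetical stand-ins for the real data;
# model is the SentenceTransformer loaded earlier. Ridge regression is a
# placeholder choice of predictive model.
X = model.encode(texts)        # (n_responses, 384) feature matrix
y = np.array(liking_scores)    # 1-5 star ratings

ridge = Ridge(alpha=1.0)
print(cross_val_score(ridge, X, y, cv=5, scoring="r2"))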

Conclusion

In this article, I showed how I use the HuggingFace Sentence Transformers library to implement natural language processing models on my internet meme dataset.

Using a subsample of 20 internet memes, I calculated sentence similarity scores between participants' written comprehension and the meme meanings. The results showed a range of correlations between sentence similarity, liking, and understanding.

I also accounted for the limitations of this approach while clarifying its usefulness for my project. I look forward to working more with natural language processing to analyze this dataset and other future projects.
