Team 4: Sentiment Analysis of UK Tweets on Covid-19

Antonin Kadi, Mathieu François, Garance Faure
05-25-2020

Pedagogical Material

As part of the Hacking Health Covid-19, the SKEMA Global Lab in AI provided SKEMA students with a fully developed data science environment to carry out their project. See [here].

For this specific module, the team used the following courses:

Project Presentation

In this frantic period of quarantine, the "hacking health hackathon" comes just in time: we are all flooded with information on Covid-19, and we have all talked and thought about how the available information is handled.

In our team, we felt that natural language processing and sentiment analysis are among the most impactful kinds of analysis, so we started from there. After discussions on technical details (which we describe below), we focused on the sentiment analysis of words posted under the hashtag #COVID19 on Twitter. After removing stop words and cleaning our data, we took the top words and split them into positive and negative terms. We decided that this analysis would have more impact if the collected data were displayed on a map of the UK, which helps show how the handling of the crisis is perceived in different areas.

Technical Process

To carry out this project, we started from code already developed by the laboratory, named "Mapping UK Tweets". We added our own sentiment analysis to it and adapted the resulting map accordingly. Below you will find the general structure of the R code, followed by details on the important parts.

The first part of the code retrieves the data for the districts. In this version of the code we do not use this data, because our computers were not powerful enough to render the interactive map. We have left the code in place so you can use it to adapt our solution to your own version. Next, we created a token to get access to the Twitter API. To do this, we created an app from a Twitter developer account, which gave us the keys needed to query the API, as you can see below.


# load rtweet
library(rtweet)

# store api keys (these are fake example values; replace with your own keys)
api_key <- "xxxyourKeyxxxxx"
api_secret_key <- "xxxxxxx"
access_token <- "xxxxxxxxx"
access_token_secret <- "xxxxxxxx"

# authenticate via web browser
token <- create_token(
  app = "Covid19AnalysisWord",
  consumer_key = api_key,
  consumer_secret = api_secret_key,
  access_token = access_token,
  access_secret = access_token_secret)
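
A quick way to verify that authentication works is to ask rtweet which token it will now use for API requests:

# should print the app name ("Covid19AnalysisWord") and the token registered above
get_token()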

With this access and the "rtweet" package, we were able to download tweets with the parameters we wanted. We searched for tweets containing the hashtag "#Covid19". We also added the longitude and latitude of each tweet to our data (thanks to the lat_lng() function), which will be needed in the next part of the code. Finally, we saved this data to CSV files for later use.


# search for 1000 tweets using the covid19 hashtag
covid19 <- search_tweets("#covid19", n=1000, include_rts = FALSE, retryonratelimit = TRUE)
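
The geocoding and saving steps mentioned above are not shown in the snippet; a minimal sketch could look like this (the file name is an assumption matching the one loaded below):

# derive lat/lng columns from the tweets' geo/place metadata
covid19 <- lat_lng(covid19)

# save the tweets so they can be reloaded later without calling the API again
write_as_csv(covid19, "./data/covidTweetData2.csv")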

We saved our tweets in CSV files; now we need to load them back.


library(dplyr)
tweets.overall2 <- read.csv("./data/covidTweetData2.csv")
tweets.overall3 <- read.csv("./data/covidTweetData3.csv")
tweets.overall <- bind_rows(tweets.overall2, tweets.overall3)

Here we add some specifications to our data. First, we keep only the tweets located in the UK, and we keep the publication date of each tweet so that the sentiment analysis can be compared across different events.


## KEEPING TWEETS OF UK
tweets.overall.LatLong <- filter(tweets.overall, lat >= 49.771686 & lat <= 60.862568)
tweets.overall.LatLong <- filter(tweets.overall.LatLong, lng >= -12.524414 & lng <= 1.785278)

## TWEETS MINING
tweets <- tweets.overall.LatLong

tweets.overall.LatLong$year <- substr(tweets.overall.LatLong$created_at, 0, 4)
tweets.overall.LatLong$month_day <- substr(tweets.overall.LatLong$created_at, 6, 10)

tweets.LatLong <- tibble(line = 1:nrow(tweets.overall.LatLong), 
                         year = tweets.overall.LatLong$year,
                         month_day = tweets.overall.LatLong$month_day,
                         latitude = tweets.overall.LatLong$lat, 
                         longitude = tweets.overall.LatLong$lng)

After that, we cleaned the text of the tweets: we removed retweet entities, mentions, punctuation, numbers, links and non-ASCII characters.


# Cleaning
text <- tweets$text

# remove retweet entities
text <- gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", text)
# remove at people
text <- gsub("@\\w+", "", text)
# remove punctuation
text <- gsub("[[:punct:]]", "", text)
# remove numbers
text <- gsub("[[:digit:]]", "", text)
# remove html links
text <- gsub("http\\w+", "", text)
# remove all pictwitter
text <- gsub("pictwitter\\w+ *", "", text)
# remove non-ASCII characters (emoji, non-Latin scripts, etc.)
text <- iconv(text, "latin1", "ASCII", sub="")

# Tibble format
text_df <- tibble(line = 1:length(text), text = text)

library(tidytext)
# Tokenization 
tidy_tweets <- text_df %>% 
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word")

# Join tweets to longitude and latitude by line number
tidy_tweets_LatLong <- left_join(tidy_tweets, tweets.LatLong, by = "line")

The data are finally ready for analysis. We use the "AFINN" lexicon for our sentiment analysis. We chose this lexicon because it returns a numeric sentiment score for each word, whereas the "Bing" lexicon only labels words as positive or negative.
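
For illustration, here is a quick way to compare the two lexicons (these calls may prompt you to download the lexicons via the textdata package the first time):

library(tidytext)
head(get_sentiments("afinn"))   # word + integer score between -5 and 5
head(get_sentiments("bing"))    # word + "positive"/"negative" label only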


# Words that contribute to positive and negative sentiment
AFINN <- get_sentiments("afinn") 

afinn_word_LatLong <- tidy_tweets_LatLong %>%
  inner_join(AFINN, by = "word")

afinn_word_LatLong_Tot <- aggregate(value ~ line + word + year + month_day + latitude + longitude, afinn_word_LatLong, sum)

afinn_word_LatLong_Tot$sentiment <- ifelse(afinn_word_LatLong_Tot$value > 0, "positive", 
                                           ifelse(afinn_word_LatLong_Tot$value == 0, "neutral", "negative"))

afinn_word_LatLong_Tot_PN <- filter(afinn_word_LatLong_Tot, sentiment != "neutral")

With this sentiment analysis, we generated four plots, which give a better view of the results than the raw scores themselves. You will find these plots in the next part.

Sentiment Analysis - Covid19


library(ggplot2)
qplot(factor(sentiment), data=afinn_word_LatLong_Tot_PN, geom="bar", fill=factor(sentiment)) +
  xlab("Sentiment Categories") + 
  ylab("Frequency") + 
  ggtitle("Sentiments Analysis - Covid19") + 
  theme(legend.position = "none")

This chart shows the frequency of negative and positive words. Surprisingly, the frequency of positive words is higher than that of negative ones. Note that the frequency here counts non-unique word occurrences.
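
If you want the underlying numbers rather than the bars, a simple count gives them (an optional check on the same data frame):

# number of positive vs negative (non-unique) scored words
afinn_word_LatLong_Tot_PN %>% count(sentiment)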

Sentiment Analysis Scores - Covid19


qplot(factor(value), data=afinn_word_LatLong_Tot_PN, geom="bar", fill=factor(value)) + 
  xlab("Sentiment Score") + 
  ylab("Frequency") + 
  ggtitle("Sentiments Analysis Scores - Covid19") + 
  theme(legend.position = "none")

This bar chart represents the frequency of words according to their sentiment score (from -4 to 6). The sentiment score is what later determines the positiveness or negativeness of the bag of words. Here we go deeper into the meaning of positive and negative: even though the frequency of positive words is higher than that of negative ones, their "positiveness" has to be put into perspective: it is mostly a 2 on a scale that goes up to 6.

Most Used Words in #Covid19 tweets


#Plot word most used
afinn_word_LatLong_Tot %>% 
  count(word, sort = TRUE) %>% top_n(10) %>% 
  mutate(word = reorder(word,n)) %>% 
  ggplot(aes(x=word, y=n)) + 
  geom_col() + xlab(NULL) + 
  coord_flip() + 
  theme_classic() + 
  labs(x = NULL, y = "Count", title = "Unique words counts found in #covid19 tweets")

This plot represents the counts of unique words found in #Covid19 tweets. At this stage, the split into positive and negative has not yet been made. We can see that "support" is the most represented word, with more than 15 occurrences, followed by "safe" and "inspiration". From the second to the sixth word the numbers of occurrences are similar, which is partly because we used a predefined dataset that is not fully representative of the overall results.

Split of Positive and Negative Terms


# sentiment analysis (afinn)
afinn_covid = afinn_word_LatLong_Tot %>% 
  inner_join(get_sentiments("afinn")) %>% 
  count(word, sentiment, month_day, sort = TRUE) %>% 
  ungroup()

#PLOT sentiment analysis
afinn_covid %>% 
  group_by(sentiment) %>% 
  top_n(10) %>% 
  ungroup() %>% 
  mutate(word = reorder(word,n)) %>% 
  ggplot(aes(word, n, fill = sentiment)) + 
  geom_col(show.legend = FALSE) + 
  facet_wrap(~sentiment, scales="free_y") + 
  labs(title = "Tweets containing #covid19", y="Contribution to sentiment", x=NULL) + 
  coord_flip() + theme_bw()


#PLot sentiment analysis per day
afinn_covid %>% 
  filter(month_day == "04-16") %>% 
  group_by(sentiment) %>% 
  top_n(10) %>% 
  ungroup() %>% mutate(word = reorder(word,n)) %>% 
  ggplot(aes(word, n, fill = sentiment)) + 
  geom_col(show.legend = FALSE) + 
  facet_wrap(~sentiment, scales="free_y") + 
  labs(title = "Tweets from 16/04 containing #covid19", y="Contribution to sentiment", x=NULL) + 
  coord_flip() + 
  theme_bw()


afinn_covid %>% 
  filter(month_day == "04-17") %>% 
  group_by(sentiment) %>% 
  top_n(10) %>% 
  ungroup() %>% mutate(word = reorder(word,n)) %>% 
  ggplot(aes(word, n, fill = sentiment)) + 
  geom_col(show.legend = FALSE) + 
  facet_wrap(~sentiment, scales="free_y") + 
  labs(title = "Tweets from 17/04 containing #covid19", y="Contribution to sentiment", x=NULL) + 
  coord_flip() + theme_bw()

These plots show the split of positive and negative terms on the 16th and 17th of April under #covid19 (as explained above, this is not ideal and depends on how the data were collected). On the 17th, we see that the negative word "crisis" is the one that appears most, followed in the negative ranking by "vulnerable" and "strange", suggesting that people mostly feel lost and disoriented. Appearing just as often, but on the positive side, we find "support", "proud" and "funny". This could lead us to think that some people can still feel positive about the situation and find a sense of purpose ("support", "proud").

Map of the positive and negative UK tweets


library(rgdal)

district <- readOGR(dsn = "./shapefile/", 
                    layer = 'Local_Authority_Districts_December_2017_Full_Clipped_Boundaries_in_United_Kingdom_WGS84')

OGR data source with driver: ESRI Shapefile 
Source: "/home/marinel/portfolio/hacking-health/_posts/2020-05-21-team4/shapefile", layer: "Local_Authority_Districts_December_2017_Full_Clipped_Boundaries_in_United_Kingdom_WGS84"
with 391 features
It has 10 fields

district@data$lad17nm <- gsub(", City of", "", district@data$lad17nm)
district@data$lad17nm <- gsub(", County of", "", district@data$lad17nm)
district@data$lad17nm <- gsub("-", " ", district@data$lad17nm)
district@data$lad17nm <- gsub("'", "", district@data$lad17nm)
district@data$lad17nm<- gsub("St ", "St. ", district@data$lad17nm)

library(tools)
district@data$lad17nm <- toTitleCase(district@data$lad17nm)

library(sf)
map <- read_sf("./shapefile/Local_Authority_Districts_December_2017_Full_Clipped_Boundaries_in_United_Kingdom_WGS84.shp")

map$lad17nm <- gsub(", City of", "", map$lad17nm)
map$lad17nm <- gsub(", County of", "", map$lad17nm)
map$lad17nm <- gsub("-", " ", map$lad17nm)
map$lad17nm <- gsub("'", "", map$lad17nm)
map$lad17nm<- gsub("St ", "St. ", map$lad17nm)

map$lad17nm <- toTitleCase(map$lad17nm)

pnts <- afinn_word_LatLong_Tot_PN

pnts_sf <- st_as_sf(pnts, coords = c('longitude', 'latitude'), crs = st_crs(map))

pnts <- pnts_sf %>% mutate(
  intersection = as.integer(st_intersects(geometry, map)), 
  lad17nm = if_else(is.na(intersection), '', map$lad17nm[intersection])
) 

pnts <- na.omit(pnts)

lll <- select(afinn_word_LatLong_Tot_PN, line, year, longitude, latitude)

pnts <- left_join(pnts, lll, by = c("line", "year"))

pnts$year <- as.numeric(pnts$year)

district@data <- left_join(district@data, pnts, by = "lad17nm")

district@data$intersection <- NULL
district@data$geometry <- NULL

library(leaflet)
bins <- c(20000, 30000, 40000, 50000, 60000, 70000, 100000, 150000, 200000)
pal <- colorBin("YlOrRd", domain = district@data$income_tot_mean, bins = bins)

# labels <- sprintf(
#   "<strong>%s</strong><br/>%g £",
#   district@data$lad17nm, district@data$income_tot_mean
# ) %>% lapply(htmltools::HTML)

palTweets <- colorFactor(c("red", "limegreen"), domain = c("positive", "negative"))

biggestSentiment <- afinn_word_LatLong_Tot

# First pass: count how many consecutive rows share the same longitude
# (several scored words coming from the same tweet/location)
j = 1

countTmp = (count(afinn_word_LatLong_Tot_PN)$n - 1)

for( i in 1:countTmp){
  if(biggestSentiment$longitude[i] == biggestSentiment$longitude[i+1])
  {
    j = j+1
  }
}

# Second pass: when two consecutive rows share a location, keep only the word
# with the larger absolute sentiment score, so a single label is shown per
# point on the map (k tracks how many rows have already been removed)
k = 0

for( i in 1:j){
    if(biggestSentiment$longitude[i-k] == biggestSentiment$longitude[i+1-k])
    {
        if(abs(biggestSentiment$value[i-k]) > abs(biggestSentiment$value[i+1-k]))
        {
            biggestSentiment <- biggestSentiment[-(i+1-k),]
        }
        else
        {
            biggestSentiment <- biggestSentiment[-(i-k),]
        }
      k = k +1
    }
}

library(leaflet.extras)
leaflet(data = biggestSentiment) %>%
  setView(-3.3269944491454773, 54.08734846094229, 4) %>%
  addProviderTiles(providers$CartoDB.Positron, 
                   options = providerTileOptions(minZoom = 5, maxZoom = 10)) %>%
  addFullscreenControl() %>%
  addLabelOnlyMarkers(lng = ~longitude, lat = ~latitude, label =biggestSentiment$word, 
                      labelOptions = labelOptions(noHide = T, direction = 'top', textOnly = T)) %>%
   addCircleMarkers(lng = ~longitude, lat = ~latitude,
                    radius = ~abs(value)*2,  # dot size scaled by the word's absolute sentiment score
                    color = ~palTweets(sentiment),
                    stroke = FALSE, 
                    fillOpacity = 1
  )

Here you can see our final result: a map of the United Kingdom with a green dot for each word considered positive and a red dot for each word considered negative. The size of a dot varies with the absolute sentiment score of the word.

tl;dr


library(rgdal)
library(leaflet)
library(leaflet.extras)
library(RColorBrewer)
library(tidyverse)
library(tools)
library(reshape2)
library(ggplot2)
library(ggridges)
library(lubridate)
library(rtweet)
library(maps)
library(quanteda)
library(wordcloud)
library(tidytext)
library(sf)
library(dplyr)

## LOADING TWEETS
tweets.overall2 <- read.csv("./data/covidTweetData2.csv")
tweets.overall3 <- read.csv("./data/covidTweetData3.csv")
tweets.overall <- bind_rows(tweets.overall2, tweets.overall3)

## KEEPING TWEETS OF UK
tweets.overall.LatLong <- filter(tweets.overall, lat >= 49.771686 & lat <= 60.862568)
tweets.overall.LatLong <- filter(tweets.overall.LatLong, lng >= -12.524414 & lng <= 1.785278)

## TWEETS MINING
tweets <- tweets.overall.LatLong

tweets.overall.LatLong$year <- substr(tweets.overall.LatLong$created_at, 0, 4)
tweets.overall.LatLong$month_day <- substr(tweets.overall.LatLong$created_at, 6, 10)

tweets.LatLong <- tibble(line = 1:nrow(tweets.overall.LatLong), 
                         year = tweets.overall.LatLong$year,
                         month_day = tweets.overall.LatLong$month_day,
                         latitude = tweets.overall.LatLong$lat, 
                         longitude = tweets.overall.LatLong$lng)

# Cleaning
text <- tweets$text

# remove retweet entities
text <- gsub("(RT|via)((?:\\b\\W*@\\w+)+)", "", text)
# remove at people
text <- gsub("@\\w+", "", text)
# remove punctuation
text <- gsub("[[:punct:]]", "", text)
# remove numbers
text <- gsub("[[:digit:]]", "", text)
# remove html links
text <- gsub("http\\w+", "", text)
# remove all pictwitter
text <- gsub("pictwitter\\w+ *", "", text)
# remove non-ASCII characters (emoji, non-Latin scripts, etc.)
text <- iconv(text, "latin1", "ASCII", sub="")

# Tibble format
text_df <- tibble(line = 1:length(text), text = text)

# Tokenization 
tidy_tweets <- text_df %>% 
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word")

# Join tweets to longitude and latitude by line number
tidy_tweets_LatLong <- left_join(tidy_tweets, tweets.LatLong, by = "line")

# Words that contribute to positive and negative sentiment
AFINN <- get_sentiments("afinn") 

afinn_word_LatLong <- tidy_tweets_LatLong %>%
  inner_join(AFINN, by = "word")

afinn_word_LatLong_Tot <- aggregate(value ~ line + word + year + month_day + latitude + longitude, afinn_word_LatLong, sum)

afinn_word_LatLong_Tot$sentiment <- ifelse(afinn_word_LatLong_Tot$value > 0, "positive", 
                                           ifelse(afinn_word_LatLong_Tot$value == 0, "neutral", "negative"))

afinn_word_LatLong_Tot_PN <- filter(afinn_word_LatLong_Tot, sentiment != "neutral")


qplot(factor(sentiment), data=afinn_word_LatLong_Tot_PN, geom="bar", fill=factor(sentiment)) +
  xlab("Sentiment Categories") + 
  ylab("Frequency") + 
  ggtitle("Sentiments Analysis - Covid19") + 
  theme(legend.position = "none")

qplot(factor(value), data=afinn_word_LatLong_Tot_PN, geom="bar", fill=factor(value)) + 
  xlab("Sentiment Score") + 
  ylab("Frequency") + 
  ggtitle("Sentiments Analysis Scores - Covid19") + 
  theme(legend.position = "none")

#Plot word most used
afinn_word_LatLong_Tot %>% 
  count(word, sort = TRUE) %>% top_n(10) %>% 
  mutate(word = reorder(word,n)) %>% 
  ggplot(aes(x=word, y=n)) + 
  geom_col() + xlab(NULL) + 
  coord_flip() + 
  theme_classic() + 
  labs(x = NULL, y = "Count", title = "Unique words counts found in #covid19 tweets")

# sentiment analysis (afinn)
afinn_covid = afinn_word_LatLong_Tot %>% 
  inner_join(get_sentiments("afinn")) %>% 
  count(word, sentiment, month_day, sort = TRUE) %>% 
  ungroup()

#PLOT sentiment analysis
afinn_covid %>% 
  group_by(sentiment) %>% 
  top_n(10) %>% 
  ungroup() %>% 
  mutate(word = reorder(word,n)) %>% 
  ggplot(aes(word, n, fill = sentiment)) + 
  geom_col(show.legend = FALSE) + 
  facet_wrap(~sentiment, scales="free_y") + 
  labs(title = "Tweets containing #covid19", y="Contribution to sentiment", x=NULL) + 
  coord_flip() + theme_bw()

#PLot sentiment analysis per day
afinn_covid %>% 
  filter(month_day == "04-16") %>% 
  group_by(sentiment) %>% 
  top_n(10) %>% 
  ungroup() %>% mutate(word = reorder(word,n)) %>% 
  ggplot(aes(word, n, fill = sentiment)) + 
  geom_col(show.legend = FALSE) + 
  facet_wrap(~sentiment, scales="free_y") + 
  labs(title = "Tweets from 16/04 containing #covid19", y="Contribution to sentiment", x=NULL) + 
  coord_flip() + 
  theme_bw()

afinn_covid %>% 
  filter(month_day == "04-17") %>% 
  group_by(sentiment) %>% 
  top_n(10) %>% 
  ungroup() %>% mutate(word = reorder(word,n)) %>% 
  ggplot(aes(word, n, fill = sentiment)) + 
  geom_col(show.legend = FALSE) + 
  facet_wrap(~sentiment, scales="free_y") + 
  labs(title = "Tweets from 17/04 containing #covid19", y="Contribution to sentiment", x=NULL) + 
  coord_flip() + theme_bw()


## SHAPEFILE MAP DISTRICT UK
district <- readOGR(dsn = "./shapefile/", 
                    layer = 'Local_Authority_Districts_December_2017_Full_Clipped_Boundaries_in_United_Kingdom_WGS84')

district@data$lad17nm <- gsub(", City of", "", district@data$lad17nm)
district@data$lad17nm <- gsub(", County of", "", district@data$lad17nm)
district@data$lad17nm <- gsub("-", " ", district@data$lad17nm)
district@data$lad17nm <- gsub("'", "", district@data$lad17nm)
district@data$lad17nm<- gsub("St ", "St. ", district@data$lad17nm)

district@data$lad17nm <- toTitleCase(district@data$lad17nm)
map <- read_sf("./shapefile/Local_Authority_Districts_December_2017_Full_Clipped_Boundaries_in_United_Kingdom_WGS84.shp")

map$lad17nm <- gsub(", City of", "", map$lad17nm)
map$lad17nm <- gsub(", County of", "", map$lad17nm)
map$lad17nm <- gsub("-", " ", map$lad17nm)
map$lad17nm <- gsub("'", "", map$lad17nm)
map$lad17nm<- gsub("St ", "St. ", map$lad17nm)

map$lad17nm <- toTitleCase(map$lad17nm)

pnts <- afinn_word_LatLong_Tot_PN

pnts_sf <- st_as_sf(pnts, coords = c('longitude', 'latitude'), crs = st_crs(map))

pnts <- pnts_sf %>% mutate(
  intersection = as.integer(st_intersects(geometry, map)), 
  lad17nm = if_else(is.na(intersection), '', map$lad17nm[intersection])
) 

pnts <- na.omit(pnts)

lll <- select(afinn_word_LatLong_Tot_PN, line, year, longitude, latitude)

pnts <- left_join(pnts, lll, by = c("line", "year"))

pnts$year <- as.numeric(pnts$year)

tweets_sentiment_income_pop_latlong <- left_join(pnts, district@data, by = "lad17nm")

tweets_sentiment_income_pop_latlong$intersection <- NULL
tweets_sentiment_income_pop_latlong$geometry <- NULL

districtID <- select(district@data, objectid, lad17cd, lad17nm)

tweets_sentiment_income_pop_latlong <- left_join(tweets_sentiment_income_pop_latlong, districtID, by = "lad17nm")

tweets_sentiment_income_pop_latlong_final <- select(tweets_sentiment_income_pop_latlong, 
                                                    line, sentiment, value, longitude, 
                                                    latitude, year, lad17nm, lad17cd, 
                                                    objectid, population, income_tot_mean)

names(tweets_sentiment_income_pop_latlong_final)[names(tweets_sentiment_income_pop_latlong_final) == "objectid"] <- "lad17id"

write.csv(tweets_sentiment_income_pop_latlong_final, "tweets_sentiment_income_pop_latlong_final.csv")

## MAP DISTRICT UK
district@data <- left_join(district@data, district_data, by = "lad17nm")

bins <- c(20000, 30000, 40000, 50000, 60000, 70000, 100000, 150000, 200000)
pal <- colorBin("YlOrRd", domain = district@data$income_tot_mean, bins = bins)

labels <- sprintf(
  "<strong>%s</strong><br/>%g £",
  district@data$lad17nm, district@data$income_tot_mean
) %>% lapply(htmltools::HTML)

palTweets <- colorFactor(c("red", "limegreen"), domain = c("positive", "negative"))

biggestSentiment <- afinn_word_LatLong_Tot

j = 1

countTmp = (count(afinn_word_LatLong_Tot_PN)$n - 1)

for( i in 1:countTmp)
{
  if(biggestSentiment$longitude[i] == biggestSentiment$longitude[i+1])
  {
    j = j+1
  }
}

k = 0

for( i in 1:j)
{
    if(biggestSentiment$longitude[i-k] == biggestSentiment$longitude[i+1-k])
    {
        if(abs(biggestSentiment$value[i-k]) > abs(biggestSentiment$value[i+1-k]))
        {
            biggestSentiment <- biggestSentiment[-(i+1-k),]
        }
        else
        {
            biggestSentiment <- biggestSentiment[-(i-k),]
        }
      k = k +1
    }
}


leaflet(data = biggestSentiment) %>%
  setView(-0.118092, 51.509865, 4) %>%
  addProviderTiles(providers$CartoDB.Positron) %>%
  addFullscreenControl() %>%
  addLabelOnlyMarkers(lng = ~longitude, lat = ~latitude, label =biggestSentiment$word, labelOptions = labelOptions(noHide = T, direction = 'top', textOnly = T)) %>%
   addCircleMarkers(lng = ~longitude, lat = ~latitude,
                    radius = ~abs(value)*2,
                    color = ~palTweets(sentiment),
                    stroke = FALSE, 
                    fillOpacity = 1
  )

To go further with our pedagogical platform

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".