University Library

LibGuides: Engineering & Innovation

About R

Good for:

  • Statistics and data analytics
  • Plotting data

Why R:

  • It’s free and open source
  • Thousands of additional packages available through CRAN
  • Used throughout academia and industry for statistical analysis

Recommended setup(s):

 

Best resource to learn quickly:

Introduction to R for Data Insights on LinkedIn Learning (access through SSO)

  • A nice introduction to R, with context on both why and how to start using R to ask questions of datasets and extract insights. Log in through SSO to get access to the exercise files.
  • Average video length: 5+ min
  • Total duration: 4h18m

Using Installed Libraries

How to load/use a library in R that isn't loaded by default

 
# To load an installed library, call the library() function and pass the name of the package.
# You can also use the require() function (see the difference below).
library(tidyverse)
library(ggplot2)

# To see details about a package, use library(help = "pkg"), e.g.:
# library(help = "ggplot2")

# The difference between library() and require() is that require() is designed for use
# inside other functions: it returns TRUE/FALSE to indicate whether the package loaded,
# and gives a warning on failure, whereas library() throws an error.
# To see more info about a function, use the help() function like so:

# help("require")

# You can also use the '?' shortcut to open the same help page:
?library
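The return-value difference can be seen directly; a minimal sketch (the package name below is deliberately fake):

```r
# require() returns a logical instead of erroring, so it fits in conditionals.
ok <- require("noSuchPackage123")   # fake name; yields a warning and FALSE
if (!ok) {
    message("package not available, skipping optional feature")
}
# library("noSuchPackage123") would instead stop with an error here.
```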

Requiring/Installing Packages

How to load or install a package if you need it

# To test whether a package can be loaded, use the require() function.
# If require() returns FALSE, the package isn't available,
# and then we know we need to install it (and load it afterwards).
if(!require("pacman"))
{
    install.packages("pacman")
    library(pacman)
}
 
Loading required package: pacman

Warning message in library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE, :
"there is no package called 'pacman'"
also installing the dependency 'remotes'


 
package 'remotes' successfully unpacked and MD5 sums checked
package 'pacman' successfully unpacked and MD5 sums checked

The downloaded binary packages are in
	C:\Users\bethke2\AppData\Local\Temp\Rtmpch8gMJ\downloaded_packages
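The same guard can be wrapped into a reusable helper with base R's requireNamespace(), which checks availability quietly without attaching the package; ensure_pkg is a hypothetical name used only for this sketch:

```r
# hypothetical helper: install a package only when it's missing, then attach it
ensure_pkg <- function(pkg) {
    if (!requireNamespace(pkg, quietly = TRUE)) {
        install.packages(pkg)
    }
    library(pkg, character.only = TRUE)
}

ensure_pkg("stats")  # stats ships with base R, so nothing gets installed
```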

Using Multiple R Scripts

How to use functions from other files

 
# To call a function defined in another .R file, use the source() function.
# For example, if you had a script saved [in the same folder] as other_script.R, 
# containing a function other_function(a, b) that returns a + (b/2), 
# you could call it like this:
source("other_script.R")
res <- other_function(1, 2)
res

# Note you only need to call 'source()' once, then you can use the function multiple times.
res2 <- other_function(40,50)
res2
 
2

65
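For reference, the sourced file described above would contain just the function definition; a sketch consistent with the outputs shown (2 and 65):

```r
# other_script.R
other_function <- function(a, b) {
    a + (b / 2)
}
```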

Useful Test Datasets

Some useful healthcare-related sample datasets for testing analysis methods

 
# first, make sure the pacman package manager is installed and loaded
if(!require("pacman"))
{
    install.packages("pacman")
    library(pacman)
}

# load the builtin datasets package,
p_load(datasets)

# then we can use the p_data() function to list the datasets a package provides
p_data(datasets)

# We can now load some packages that contain medically relevant datasets
p_load(carData)
p_load(caret)

print("carData, Companion to Applied Regression")
p_data(carData)
print("caret, Classification and Regression Training")
p_data(caret)
 
A wide_table: 104 × 2
Data Description
<chr> <chr>
AirPassengers Monthly Airline Passenger Numbers 1949-1960
BJsales Sales Data with Leading Indicator
BJsales.lead (BJsales) Sales Data with Leading Indicator
BOD Biochemical Oxygen Demand
CO2 Carbon Dioxide Uptake in Grass Plants
ChickWeight Weight versus age of chicks on different diets
DNase Elisa assay of DNase
EuStockMarkets Daily Closing Prices of Major European Stock Indices, 1991-1998
Formaldehyde Determination of Formaldehyde
HairEyeColor Hair and Eye Color of Statistics Students
Harman23.cor Harman Example 2.3
Harman74.cor Harman Example 7.4
Indometh Pharmacokinetics of Indomethacin
InsectSprays Effectiveness of Insect Sprays
JohnsonJohnson Quarterly Earnings per Johnson & Johnson Share
LakeHuron Level of Lake Huron 1875-1972
LifeCycleSavings Intercountry Life-Cycle Savings Data
Loblolly Growth of Loblolly pine trees
Nile Flow of the River Nile
Orange Growth of Orange Trees
OrchardSprays Potency of Orchard Sprays
PlantGrowth Results from an Experiment on Plant Growth
Puromycin Reaction Velocity of an Enzymatic Reaction
Seatbelts Road Casualties in Great Britain 1969-84
Theoph Pharmacokinetics of Theophylline
Titanic Survival of passengers on the Titanic
ToothGrowth The Effect of Vitamin C on Tooth Growth in Guinea Pigs
UCBAdmissions Student Admissions at UC Berkeley
UKDriverDeaths Road Casualties in Great Britain 1969-84
UKgas UK Quarterly Gas Consumption
npk Classical N, P, K Factorial Experiment
occupationalStatus Occupational Status of Fathers and their Sons
precip Annual Precipitation in US Cities
presidents Quarterly Approval Ratings of US Presidents
pressure Vapor Pressure of Mercury as a Function of Temperature
quakes Locations of Earthquakes off Fiji
randu Random Numbers from Congruential Generator RANDU
rivers Lengths of Major North American Rivers
rock Measurements on Petroleum Rock Samples
sleep Student's Sleep Data
stack.loss (stackloss) Brownlee's Stack Loss Plant Data
stack.x (stackloss) Brownlee's Stack Loss Plant Data
stackloss Brownlee's Stack Loss Plant Data
state.abb (state) US State Facts and Figures
state.area (state) US State Facts and Figures
state.center (state) US State Facts and Figures
state.division (state) US State Facts and Figures
state.name (state) US State Facts and Figures
state.region (state) US State Facts and Figures
state.x77 (state) US State Facts and Figures
sunspot.month Monthly Sunspot Data, from 1749 to "Present"
sunspot.year Yearly Sunspot Data, 1700-1988
sunspots Monthly Sunspot Numbers, 1749-1983
swiss Swiss Fertility and Socioeconomic Indicators (1888) Data
treering Yearly Treering Data, -6000-1979
trees Diameter, Height and Volume for Black Cherry Trees
uspop Populations Recorded by the US Census
volcano Topographic Information on Auckland's Maunga Whau Volcano
warpbreaks The Number of Breaks in Yarn during Weaving
women Average Heights and Weights for American Women
 
[1] "carData, Companion to Applied Regression"
 
A wide_table: 63 × 2
Data Description
<chr> <chr>
AMSsurvey American Math Society Survey Data
Adler Experimenter Expectations
Angell Moral Integration of American Cities
Anscombe U. S. State Public-School Expenditures
Arrests Arrests for Marijuana Possession
BEPS British Election Panel Study
Baumann Methods of Teaching Reading Comprehension
Bfox Canadian Women's Labour-Force Participation
Blackmore Exercise Histories of Eating-Disordered and Control Subjects
Burt Fraudulent Data on IQs of Twins Raised Apart
CES11 2011 Canadian National Election Study, With Attitude Toward Abortion
CanPop Canadian Population Data
Chile Voting Intentions in the 1988 Chilean Plebiscite
Chirot The 1907 Romanian Peasant Rebellion
Cowles Cowles and Davis's Data on Volunteering
Davis Self-Reports of Height and Weight
DavisThin Davis's Data on Drive for Thinness
Depredations Minnesota Wolf Depredation Data
Duncan Duncan's Occupational Prestige Data
Ericksen The 1980 U.S. Census Undercount
Florida Florida County Voting
Freedman Crowding and Crime in U. S. Metropolitan Areas
Friendly Format Effects on Recall
GSSvocab Data from the General Social Survey (GSS) from the National Opinion Research Center of the University of Chicago.
Ginzberg Data on Depression
Greene Refugee Appeals
Guyer Anonymity and Cooperation
Hartnagel Canadian Crime-Rates Time Series
Highway1 Highway Accidents
KosteckiDillon Treatment of Migraine Headaches
Migration Canadian Interprovincial Migration Data
Moore Status, Authoritarianism, and Conformity
MplsDemo Minneapolis Demographic Data 2015, by Neighborhood
MplsStops Minneapolis Police Department 2017 Stop Data
Mroz U.S. Women's Labor-Force Participation
OBrienKaiser O'Brien and Kaiser's Repeated-Measures Data
OBrienKaiserLong O'Brien and Kaiser's Repeated-Measures Data in "Long" Format
Ornstein Interlocking Directorates Among Major Canadian Firms
Pottery Chemical Composition of Pottery
Prestige Prestige of Canadian Occupations
Quartet Four Regression Datasets
Robey Fertility and Contraception
Rossi Rossi et al.'s Criminal Recidivism Data
SLID Survey of Labour and Income Dynamics
Sahlins Agricultural Production in Mazulu Village
Salaries Salaries for Professors
Soils Soil Compositions of Physical and Chemical Characteristics
States Education and Related Statistics for the U.S. States
TitanicSurvival Survival of Passengers on the Titanic
Transact Transaction data
UN National Statistics from the United Nations, Mostly From 2009-2011
UN98 United Nations Social Indicators Data 1998
USPop Population of the United States
Vocab Vocabulary and Education
WVS World Values Surveys
WeightLoss Weight Loss Data
Wells Well Switching in Bangladesh
Womenlf Canadian Women's Labour-Force Participation
Wong Post-Coma Recovery of IQ
Wool Wool data
 
[1] "caret, Classification and Regression Training"
 
A wide_table: 19 × 2
Data Description
<chr> <chr>
GermanCredit German Credit Data
Sacramento Sacramento CA Home Prices
absorp (tecator) Fat, Water and Protein Content of Meat Samples
bbbDescr (BloodBrain) Blood Brain Barrier Data
cars Kelly Blue Book resale data for 2005 model year GM cars
cox2Class (cox2) COX-2 Activity Data
cox2Descr (cox2) COX-2 Activity Data
cox2IC50 (cox2) COX-2 Activity Data
dhfr Dihydrofolate Reductase Inhibitors Data
endpoints (tecator) Fat, Water and Protein Content of Meat Samples
fattyAcids (oil) Fatty acid composition of commercial oils
logBBB (BloodBrain) Blood Brain Barrier Data
mdrrClass (mdrr) Multidrug Resistance Reversal (MDRR) Agent Data
mdrrDescr (mdrr) Multidrug Resistance Reversal (MDRR) Agent Data
oilType (oil) Fatty acid composition of commercial oils
potteryClass (pottery) Pottery from Pre-Classical Sites in Italy
scat Morphometric Data on Scat
scat_orig (scat) Morphometric Data on Scat
segmentationData Cell Body Segmentation
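Any dataset listed above can then be pulled into the session by name with data(); for example, using the builtin ToothGrowth data (no extra packages needed):

```r
library(datasets)
data(ToothGrowth)        # vitamin C and tooth growth in guinea pigs
str(ToothGrowth)         # 60 observations of len, supp, and dose
head(ToothGrowth, 3)
```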

Importing Data From an Excel Spreadsheet

How to open Excel datasets with R. This is handy for visually double-checking your data, and for manipulating data someone else has already prepared.

# first, load the rio (R Input/Output) package
if(!require("rio"))
{
    install.packages("rio")
    library(rio)
}

# the pipe (%>%), as_tibble(), and select() below come from the tidyverse
if(!require("tidyverse"))
{
    install.packages("tidyverse")
    library(tidyverse)
}

# read in an xlsx Excel file
df1 <- import("data/ExampleDataClean.xlsx") %>% as_tibble()
print(df1)

# read an xlsx Excel file and select specific fields
df2 <- import("data/ExampleDataClean.xlsx") %>%
    as_tibble() %>%
    select(Field1, 
        Field3:Field6)
print(df2)
 
# A tibble: 19 x 12
   Date                Location Field1 Field2 Field3  Field4 Field5  Field6
   <dttm>              <chr>     <dbl>  <dbl>  <dbl>   <dbl>  <dbl>   <dbl>
 1 2020-05-06 00:00:00 US       0.0871  0.466 0.257  0.851   0.0247 0.171  
 2 2020-05-07 00:00:00 US       0.374   0.596 0.461  0.456   0.473  0.0374 
 3 2020-05-09 00:00:00 MX       0.951   0.787 0.376  0.652   0.224  0.364  
 4 2020-05-11 00:00:00 CA       0.957   0.138 0.949  0.252   0.422  0.782  
 5 2020-05-13 00:00:00 US       0.275   0.760 0.624  0.0968  0.266  0.175  
 6 2020-05-15 00:00:00 CA       0.580   0.598 0.354  0.926   0.0612 0.302  
 7 2020-05-17 00:00:00 US       0.935   0.918 0.662  0.260   0.0402 0.0869 
 8 2020-05-19 00:00:00 MX       0.783   0.316 0.387  0.0216  0.391  0.835  
 9 2020-05-21 00:00:00 MX       0.283   0.928 0.543  0.318   0.896  0.958  
10 2020-05-23 00:00:00 US       0.357   0.343 0.762  0.0973  0.628  0.125  
11 2020-05-25 00:00:00 CA       0.0714  0.108 0.788  0.413   0.877  0.380  
12 2020-05-27 00:00:00 CA       0.0165  0.364 0.303  0.655   0.702  0.658  
13 2020-05-29 00:00:00 CA       0.844   0.348 0.430  0.789   0.326  0.398  
14 2020-05-31 00:00:00 US       0.621   0.226 0.0963 0.700   0.608  0.00775
15 2020-06-02 00:00:00 MX       0.0751  0.893 0.448  0.00158 0.0923 0.171  
16 2020-06-04 00:00:00 US       0.480   0.586 0.751  0.731   0.321  0.210  
17 2020-06-06 00:00:00 US       0.227   0.189 0.463  0.728   0.220  0.301  
18 2020-06-08 00:00:00 US       0.401   0.951 0.434  0.759   0.387  0.188  
19 2020-06-10 00:00:00 US       0.447   0.553 0.703  0.0753  0.572  0.774  
# ... with 4 more variables: Field7 <dbl>, Field8 <dbl>, Field9 <dbl>,
#   Field10 <dbl>
# A tibble: 19 x 5
   Field1 Field3  Field4 Field5  Field6
    <dbl>  <dbl>   <dbl>  <dbl>   <dbl>
 1 0.0871 0.257  0.851   0.0247 0.171  
 2 0.374  0.461  0.456   0.473  0.0374 
 3 0.951  0.376  0.652   0.224  0.364  
 4 0.957  0.949  0.252   0.422  0.782  
 5 0.275  0.624  0.0968  0.266  0.175  
 6 0.580  0.354  0.926   0.0612 0.302  
 7 0.935  0.662  0.260   0.0402 0.0869 
 8 0.783  0.387  0.0216  0.391  0.835  
 9 0.283  0.543  0.318   0.896  0.958  
10 0.357  0.762  0.0973  0.628  0.125  
11 0.0714 0.788  0.413   0.877  0.380  
12 0.0165 0.303  0.655   0.702  0.658  
13 0.844  0.430  0.789   0.326  0.398  
14 0.621  0.0963 0.700   0.608  0.00775
15 0.0751 0.448  0.00158 0.0923 0.171  
16 0.480  0.751  0.731   0.321  0.210  
17 0.227  0.463  0.728   0.220  0.301  
18 0.401  0.434  0.759   0.387  0.188  
19 0.447  0.703  0.0753  0.572  0.774  

Importing Data From CSV

CSV (comma-separated values) files are a very common vehicle for both clean and raw data.
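As a baseline, base R can already round-trip a CSV with write.csv()/read.csv(); a self-contained sketch using a temporary file (rio's import(), used below, adds format auto-detection on top of this):

```r
# build a small frame, write it out, and read it back
df <- data.frame(id = 1:3, val = c(0.5, 1.5, 2.5))
tmp <- tempfile(fileext = ".csv")
write.csv(df, tmp, row.names = FALSE)
df_back <- read.csv(tmp)
identical(df$val, df_back$val)   # the numeric values survive the round trip
```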

# first, load the rio (R Input/Output) package
if(!require("rio"))
{
    install.packages("rio")
    library(rio)
}

# now we read in the csv data
df1 <- import("data/ExampleDataClean2.csv")
print(colnames(df1))

# we can use slicing notation as [rows, columns] of a matrix-like object to 
# select certain data from the whole.
df1[1:5,1:4]

# Unlike slicing, head() and tail() won't fail if we ask for more rows than exist,
# so they are a safe way to preview data and check that it loaded correctly.

# first five rows, all columns
head(df1, 5)
# last 3 rows, all columns
tail(df1, 3)

# we often need the number of rows or columns to slice or subset data.
# use nrow() or ncol() for these; dim() returns both at once
rs <- nrow(df1)
cs <- ncol(df1)

# To make unpacking multiple return values easier, use the zeallot package.
if(!require("zeallot"))
{
    install.packages("zeallot")
    library(zeallot)
}
# zeallot's destructuring operator, %<-%, unpacks dim()'s result into named variables
c(rows, cols) %<-% dim(df1)

sprintf(fmt = "rows = %d, cols = %d, dim = %dx%d", rs, cs, rows, cols)
 
[1] "Timestamp" "Val1"      "Val2"      "Val3"     
 
A data.frame: 5 × 4
  Timestamp Val1 Val2 Val3
  <chr> <chr> <dbl> <dbl>
1 15:50:40.94 7741335 pig attached -0.656250 6.699250
2 15:50:41.08 7741335 pig attached -0.562500 6.770688
3 15:50:41.23 7741335 pig attached -1.265625 7.151688
4 15:50:41.38 7741335 pig attached -0.656250 10.191750
5 15:50:41.51 7741335 pig attached 1.062500 12.961937
 
A data.frame: 5 × 4
  Timestamp Val1 Val2 Val3
  <chr> <chr> <dbl> <dbl>
1 15:50:40.94 7741335 pig attached -0.656250 6.699250
2 15:50:41.08 7741335 pig attached -0.562500 6.770688
3 15:50:41.23 7741335 pig attached -1.265625 7.151688
4 15:50:41.38 7741335 pig attached -0.656250 10.191750
5 15:50:41.51 7741335 pig attached 1.062500 12.961937
 
A data.frame: 3 × 4
  Timestamp Val1 Val2 Val3
  <chr> <chr> <dbl> <dbl>
401 15:51:39.27 7741335 pig attached -1.343750 6.770688
402 15:51:39.41 7741335 pig attached 1.015625 6.667500
403 15:51:39.56 7741335 pig attached -1.000000 6.611937
 
Loading required package: zeallot

 

'rows = 403, cols = 4, dim = 403x4'
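The slicing and dimension helpers above behave the same on any frame; a tiny self-contained example:

```r
m <- data.frame(a = 1:6, b = letters[1:6])
m[1:2, ]        # first two rows, all columns
head(m, 10)     # asking for more rows than exist is safe: returns all 6
dim(m)          # 6 2
nrow(m)
ncol(m)
```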

Calculating Statistics by Index

How to analyze parts of the dataset

# the apply() function applies a function across the rows or columns of a matrix
# or dataframe without looping over each row/column explicitly.

# first, load the rio (R Input/Output) package
if(!require("rio"))
{
    install.packages("rio")
    library(rio)
}

# To make unpacking multiple return values easier, use the zeallot package.
if(!require("zeallot"))
{
    install.packages("zeallot")
    library(zeallot)
}

# now we read in the csv data to dataframe
df1 <- import("data/ExampleDataClean2.csv")
df1

# Now if we wanted the max of each column, we can use apply() with MARGIN = 2.
# Note that on a mixed-type dataframe, apply() coerces everything to character.
maxes <- apply(df1, 2, max)
print('max of each column:')
maxes
# for rows instead of columns, change the second argument from '2' to '1';
# for every element, use c(1,2)


# if you want a list or vector result, use lapply() or sapply() respectively on just a part of the dataframe/matrix.
# example:
max_row_4 <- lapply(df1[4,], max)
print('max of row 4:')
max_row_4

min_column_3 <- sapply(df1[3:4,3], min)
print('min of rows 3 and 4, column 3:')
min_column_3


# You can also define your own function to apply.  Just define it, then reference it in apply.
inv_sq <- function(x){   
    # takes the square of the inverse of non-zero elements.
    ix <- x[!is.infinite(1.0/x)]
    return(1/(ix^(2)))
}
# apply to the last ten rows of column 4 of df1:
Val3_invsq <- sapply(tail(df1[,4],10), inv_sq)
Val3_invsq
 
A data.frame: 403 × 4
  Timestamp Val1 Val2 Val3
  <chr> <chr> <dbl> <dbl>
1 15:50:40.94 7741335 pig attached -0.656250 6.699250
2 15:50:41.08 7741335 pig attached -0.562500 6.770688
3 15:50:41.23 7741335 pig attached -1.265625 7.151688
4 15:50:41.38 7741335 pig attached -0.656250 10.191750
5 15:50:41.51 7741335 pig attached 1.062500 12.961937
6 15:50:41.65 7741335 pig attached -1.296875 15.176500
7 15:50:41.80 7741335 pig attached -0.359375 16.287750
8 15:50:41.95 7741335 pig attached 0.453125 17.907000
9 15:50:42.08 7741335 pig attached 0.265625 19.050000
10 15:50:42.22 7741335 pig attached -0.250000 19.431000
11 15:50:42.37 7741335 pig attached 0.609375 20.193000
12 15:50:42.52 7741335 pig attached -0.578125 19.653250
13 15:50:42.65 7741335 pig attached -1.078125 16.668750
14 15:50:42.79 7741335 pig attached -1.328125 14.541500
15 15:50:42.94 7741335 pig attached 0.296875 13.049250
16 15:50:43.09 7741335 pig attached -0.312500 11.485563
17 15:50:43.22 7741335 pig attached 0.359375 10.977562
18 15:50:43.36 7741335 pig attached 0.781250 10.001250
19 15:50:43.51 7741335 pig attached 0.750000 9.477375
20 15:50:43.66 7741335 pig attached -0.921875 8.763000
21 15:50:43.79 7741335 pig attached -1.531250 8.453437
22 15:50:43.94 7741335 pig attached 0.937500 7.794625
23 15:50:44.08 7741335 pig attached -0.484375 7.572375
24 15:50:44.24 7741335 pig attached 0.906250 7.183438
25 15:50:44.36 7741335 pig attached -0.281250 7.135813
26 15:50:44.51 7741335 pig attached -1.140625 6.437312
27 15:50:44.65 7741335 pig attached 1.140625 6.365875
28 15:50:44.81 7741335 pig attached -0.796875 6.611937
29 15:50:44.93 7741335 pig attached 0.421875 10.183813
30 15:50:45.08 7741335 pig attached -1.312500 12.469812
374 15:51:35.42 7741335 pig attached 0.484375 6.992938
375 15:51:35.56 7741335 pig attached 0.796875 7.167562
376 15:51:35.72 7741335 pig attached 0.015625 6.810375
377 15:51:35.84 7741335 pig attached 1.031250 6.500813
378 15:51:35.99 7741335 pig attached -0.015625 6.588125
379 15:51:36.13 7741335 pig attached -1.093750 8.842375
380 15:51:36.29 7741335 pig attached 0.359375 11.898312
381 15:51:36.41 7741335 pig attached 0.671875 14.001750
382 15:51:36.56 7741335 pig attached 0.390625 15.843250
383 15:51:36.71 7741335 pig attached 0.109375 17.240250
384 15:51:36.86 7741335 pig attached -0.265625 18.161000
385 15:51:36.98 7741335 pig attached -0.500000 19.367500
386 15:51:37.13 7741335 pig attached 0.359375 19.748500
387 15:51:37.28 7741335 pig attached -0.046875 19.716750
388 15:51:37.43 7741335 pig attached -0.281250 17.049750
389 15:51:37.56 7741335 pig attached 1.046875 14.509750
390 15:51:37.70 7741335 pig attached -1.046875 12.596813
391 15:51:37.85 7741335 pig attached 1.000000 11.477625
392 15:51:38.00 7741335 pig attached -0.218750 10.810875
393 15:51:38.13 7741335 pig attached 0.078125 10.104437
394 15:51:38.27 7741335 pig attached 0.265625 9.151938
395 15:51:38.42 7741335 pig attached 1.046875 9.247188
396 15:51:38.57 7741335 pig attached -0.250000 8.628063
397 15:51:38.70 7741335 pig attached 0.265625 8.096250
398 15:51:38.84 7741335 pig attached 0.625000 7.643813
399 15:51:38.99 7741335 pig attached -0.437500 7.381875
400 15:51:39.14 7741335 pig attached 0.500000 6.905625
401 15:51:39.27 7741335 pig attached -1.343750 6.770688
402 15:51:39.41 7741335 pig attached 1.015625 6.667500
403 15:51:39.56 7741335 pig attached -1.000000 6.611937
 
[1] "max of each column:"
 
Timestamp
'15:51:39.56'
Val1
'7741335 pig attached'
Val2
' 1.187500'
Val3
'20.510500'
 
[1] "max of row 4:"
 
$Timestamp
'15:50:41.38'
$Val1
'7741335 pig attached'
$Val2
-0.65625
$Val3
10.19175
 
[1] "min of rows 3 and 4, column 3:"
 
  1. -1.265625
  2. -0.65625
 
  1. 0.01193916375176
  2. 0.0116944734617057
  3. 0.0134330130056421
  4. 0.0152557014072121
  5. 0.0171151200267245
  6. 0.0183512911828691
  7. 0.0209697869521251
  8. 0.0218139574195254
  9. 0.0224943760545117
  10. 0.0228740212686229
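Because the MARGIN argument (1 = rows, 2 = columns) is easy to mix up, here is the behavior on a small numeric matrix:

```r
mat <- matrix(1:6, nrow = 2)   # 2 rows x 3 columns, filled column-wise
apply(mat, 1, max)             # per-row maxima: 5 6
apply(mat, 2, max)             # per-column maxima: 2 4 6
apply(mat, c(1, 2), max)       # per-element: returns the matrix unchanged
```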

Plotting Data

How to plot data in R

# first, load the rio (R Input/Output) package
if(!require("rio"))
{
    install.packages("rio")
    library(rio)
}

# Use the ggplot2 library (also part of the tidyverse), and reshape2 for melt()
if(!require("ggplot2")) install.packages("ggplot2")
library(ggplot2)
if(!require("reshape2")) install.packages("reshape2")
library(reshape2)

# now we read in the csv data to dataframe
df1 <- import("data/ExampleDataClean2.csv")
df1

# first, add an index column by row
df1$id <- 1:nrow(df1)

# extract subset of data columns to plot, including the index
df = data.frame("id" = df1$id,
                "Val2" = df1$Val2,
                "Val3" = df1$Val3)
# melt the data so each data point has a label and value tied to the index
d <- melt(df, id="id")

# name the melted data to make plotting easier
names(d) <- c('id', 'func', 'val')

# plot the melted data as lines all at once:
# ggplot() + 
#     geom_line(data=d, aes(x=id, y=val, color=func, group=func))

# alternatively, you could plot with fewer preparation steps,
# at the cost of specifying each line manually:
ggplot(data = df1, aes(x=id)) +
    geom_line(aes(y = Val2, color = "Val2")) + 
    geom_line(aes(y = Val3, color = "Val3")) + 
    labs(x = "datapoints", y = "Values")
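For a quick look without ggplot2 or melting, base graphics can overlay the same two columns via matplot(); the data frame here is synthetic, since it only illustrates the call:

```r
# synthetic stand-in for df1's numeric columns
df <- data.frame(id   = 1:100,
                 Val2 = sin(seq(0, 6, length.out = 100)),
                 Val3 = cos(seq(0, 6, length.out = 100)))

matplot(df$id, as.matrix(df[, c("Val2", "Val3")]), type = "l", lty = 1,
        xlab = "datapoints", ylab = "Values")
legend("topright", legend = c("Val2", "Val3"), col = 1:2, lty = 1)
```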
 