Timelapse Cloud

I’ve got a timelapse camera (a Raspberry Pi running Linux) which takes photos. I want a website with videos built from these timelapse images, without making them manually. Here’s my working.


Storage: AWS S3 bucket:

  • Use it to store the images, the videos and the static files for a client-side website.
  • rclone can easily write/sync to S3 from the timelapse camera

Process: Spot instance from AWS

  • Making movies is processor-intensive, so use spot instances.
  • Use a fleet of 2-vCPU instance types such as c4.large, t3.medium or c5.large.
  • Mount the bucket as a local drive

User Interface: Web page

  • Client-side javascript to display images and videos
  • Configure s3 to serve static pages
  • Use the S3 listing API (or plain HTTP requests) from JavaScript to interrogate the bucket for file and directory listings.


  • One day (10h) is 1200 images (30s interval).
  • At 25fps this is 48s of video.
  • Photos are 1920×1080, 90% quality => 300k each
  • One day of photos is 360M
  • One day video is 100M (x264 defaults)
  • One day video, dashify’d (360p, 720p, 1080p) is 120M
  • Takes 5 minutes to make a day’s video
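The arithmetic behind these figures can be checked in the shell (the ~300k per photo is a measurement from my camera, not derived):

```shell
# One day of shooting: 10 hours, one photo every 30 seconds
images=$(( 10 * 3600 / 30 ))      # 1200 images
seconds=$(( images / 25 ))        # 48 s of video at 25 fps
day_mb=$(( images * 300 / 1000 )) # ~360 MB of photos at ~300 kB each
echo "$images images, ${seconds}s video, ~${day_mb}MB of photos"
```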

Setup Steps

Mount the bucket

sudo apt-get update
sudo apt-get install s3fs # no! need to build from source
sudo apt install awscli
aws configure
vim ~/.passwd-s3fs # and add USER:SECRET

Unfortunately the apt-get s3fs package didn’t work, so I built from source (no problems), following the manual and directions.

s3fs bucket.name /home/ubuntu/tmv -o use_cache=/tmp  -o default_acl=public-read -o use_path_request_style -o url=https://s3-ap-southeast-2.amazonaws.com

Usually you’d add this to /etc/fstab, but the dots in the bucket name caused problems. Instead, add it as a script in /etc/rc.local, run as a user:

vi /etc/rc.local
sudo -u ubuntu s3fs bucket.name /home/ubuntu/tmv -o use_cache=/tmp -o default_acl=public-read -o use_path_request_style -o url=https://s3-ap-southeast-2.amazonaws.com

To make created files public read-only by default, add the following policy:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
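You can save the policy to a file and sanity-check it before applying; the commented put-bucket-policy step needs your real bucket name ("bucket-name" is a placeholder):

```shell
# Write the public-read policy to a file ("bucket-name" is a placeholder)
cat > /tmp/policy.json <<'EOF'
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
EOF
# Check it's valid JSON before uploading
python3 -m json.tool /tmp/policy.json > /dev/null && echo "policy OK"
# Then apply it:
# aws s3api put-bucket-policy --bucket bucket-name --policy file:///tmp/policy.json
```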

Install ffmpeg

Sadly, the packaged version doesn’t work as expected.

sudo snap install ffmpeg # doesn’t work!

Instead, compile as per the manual, with minimal libraries – mainly libx264 and libx265. The final configure was:

./configure \
  --prefix="$HOME/ffmpeg_build" \
  --pkg-config-flags="--static" \
  --extra-cflags="-I$HOME/ffmpeg_build/include" \
  --extra-ldflags="-L$HOME/ffmpeg_build/lib" \
  --extra-libs="-lpthread -lm" \
  --bindir="$HOME/bin" \
  --enable-gpl \
  --enable-libass \
  --enable-libfreetype \
  --enable-libx264 \
  --enable-libx265 \
  --enable-nonfree

Then I just symlinked the ffmpeg binary from ~/bin (where --bindir puts it) into /usr/local/bin.
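For reference, the core ffmpeg invocation for turning a day’s photos into a video looks roughly like this. The path and glob are hypothetical examples, and the tlmm scripts wrap calls of this shape; shown here as an echoed dry run rather than an actual encode:

```shell
# Sketch only: echo the command rather than run it (path is an example)
DAY_DIR=/home/ubuntu/tmv/picam/2019-01-01
CMD="ffmpeg -framerate 25 -pattern_type glob -i '$DAY_DIR/*.jpg' -c:v libx264 -pix_fmt yuv420p $DAY_DIR.mp4"
echo "$CMD"
```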

Install pi-timolo and tlmm to convert images to videos

# Get python3 setup
sudo apt install python3
python3 -m pip install nptime numpy Pillow exifread
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 1
sudo update-alternatives --config python
# Install timelapse movie makers (which call ffmpeg)
git clone https://github.com/brettbeeson/tlmm.git
git clone https://github.com/brettbeeson/pi-timolo.git
# install with simple symlinks
cd /usr/local/bin
ln -s ~/tlmm/tlinfo.py # gets # frames, fps, etc
ln -s ~/tlmm/filebyday.py # sort the incoming photos into folders
ln -s ~/tlmm/dashify.py # make dash streaming vids
ln -s ~/tlmm/tlconcat.py # join videos

Also install MP4Box (part of GPAC) for DASH streaming capability; it’s used by the dashify script.

Setup cron to convert images to videos

# Give cron a minimal PATH; /usr/local/bin must be on it
PATH=/usr/local/bin:/usr/bin:/bin

*/5 * * * * /home/ubuntu/pi-timolo/filebyday.sh /home/ubuntu/tmv/picam/ >> /home/ubuntu/tmv/picam/filebyday.log
0 * * * * /home/ubuntu/pi-timolo/makedailymovies.sh /home/ubuntu/tmv/picam/ >> /home/ubuntu/tmv/picam/makedailymovies.log
0 2 * * * /home/ubuntu/pi-timolo/makelongermovies.sh /home/ubuntu/tmv/picam/ >> /home/ubuntu/tmv/picam/makelongermovies.log

Setup webserver

Just copy files to ~/tmv (i.e. your s3 bucket root).

The default ACL was set to public-read when mounting s3 (above). If you get 403 errors, use “Make Public” in the S3 AWS console and check permissions.


Once we’re set up, we can write the app.

Some architecture options:

  1. Server: Use a server to interrogate the filesystem and serve web pages.
  2. Serverless: Use Javascript and AWS S3 SDK to interrogate the bucket’s contents directly.

A serverless method is simpler and is what I used. In both cases, we still need a compute server to combine the images into videos, and to combine videos into longer videos. (An AWS Lambda function could eliminate the need for a separate server, but I didn’t explore this. Explore later.)

So a spot instance periodically uses s3 sync to fetch new photos, combines them into videos and pushes the results back to the bucket. The web page (running client-side from S3-served files) interrogates the bucket for the latest photo/video and displays it.
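Stripped down, the spot instance’s cycle is just three steps. A dry-run sketch (echo shows the commands; drop the echo to run them; bucket.name is a placeholder and the script name matches the cron jobs above):

```shell
# Dry run: echo the three steps of the cycle (remove echo to run them)
BUCKET=s3://bucket.name/picam
LOCAL=/home/ubuntu/tmv/picam
echo aws s3 sync "$BUCKET" "$LOCAL"      # pull new photos down
echo makedailymovies.sh "$LOCAL"         # combine photos into videos
echo aws s3 sync "$LOCAL" "$BUCKET"      # push finished videos back
```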


The AWS tutorial and docs are good. IAM and Cognito are okay to set up. Some pointers for getting the server, rather than the client, to filter listings; this is essential when there are lots of files in a bucket.

  • prefix: (this/is/a/prefix/file.txt) works like a directory
  • delimiter: (“/”) use “/” to emulate folders
  • common-prefixes: returns “folders” under a specific prefix (folder)
  • marker: alphabetical start point
  • use listObjectsV2 and continuationToken instead of marker
  • you need to use async (i.e. Promises) and await the results of calls like listObjects
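To see how prefix/delimiter listing behaves without needing credentials, here’s a pure-shell emulation of CommonPrefixes over some example keys (the keys and prefix are made up; the commented aws s3api call is the real thing):

```shell
# Real call (needs credentials):
#   aws s3api list-objects-v2 --bucket bucket-name --prefix picam/ --delimiter /
# Emulation: keys under the prefix that contain a further "/" collapse
# into a single common prefix, i.e. a "folder"
keys="picam/2019-01-01/img001.jpg
picam/2019-01-01/img002.jpg
picam/2019-01-02/img001.jpg
picam/daily.mp4"
prefix="picam/"
folders=$(echo "$keys" | grep "^$prefix" | sed -n "s|^\($prefix[^/]*/\).*|\1|p" | sort -u)
echo "$folders"
```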
