The Panda Kaltura AWS Cluster – Backup

We have covered many topics on creating, managing and lowering the costs of a Kaltura video cluster on Amazon AWS. In this post we will cover backup and recovery using AWS services. When considering backup for your Kaltura AWS cluster, there are several parts to consider:

  • Video storage – which is probably on S3. This is the main body of data. S3 itself provides more than 99.99% guarantees against data loss, with triple replication (double in the case of Reduced Redundancy Storage). However, it is not well suited for backups since it is quite expensive.
  • EC2 instances and EBS volumes – your EC2 instances include your application data and code, including configuration files or, in the case of Kaltura, custom modifications to the server code.

There are several backup options: using AWS services, using third-party tools, or writing a script yourself using the AWS API.

Glacier

The Glacier service provides low-cost storage for backup and archiving, and it integrates well with S3. Both services offer dependable and highly durable storage, but while Amazon S3 is designed for rapid retrieval, Glacier trades retrieval time for cost, providing storage for as little as $0.01 per gigabyte per month while retrieving data within three to five hours.

You can set up automatic, policy-driven archiving to Glacier storage as your data ages. This is called a lifecycle rule.
First, you need to tell S3 which objects are to be archived, and under what conditions.

For example, you can set a prefix to which the rule will apply. You then set a time specifier for transitioning objects to Glacier, which can be relative (migrate objects older than a certain number of days) or absolute (migrate objects on a specific date). Optionally, you can also set an object age at which the object will be deleted from S3 altogether.
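With boto3 (the AWS SDK for Python), such a lifecycle rule might look like the sketch below. The bucket name, the "videos/" prefix and the day counts are illustrative assumptions, not values from this cluster:

```python
# Sketch of a lifecycle rule: archive objects under "videos/" to Glacier
# after 90 days, and expire (delete) them from S3 after 365 days.
# Prefix and day counts are placeholders -- adjust to your own setup.
LIFECYCLE_CONFIG = {
    "Rules": [
        {
            "ID": "archive-old-videos",
            "Filter": {"Prefix": "videos/"},
            "Status": "Enabled",
            # Relative time specifier: migrate objects older than 90 days.
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            # Optional: remove the object from S3 entirely after a year.
            "Expiration": {"Days": 365},
        }
    ]
}

def apply_lifecycle(bucket: str) -> None:
    """Attach the rule to a bucket (requires AWS credentials)."""
    import boto3  # local import so the sketch loads without boto3 installed
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=LIFECYCLE_CONFIG
    )
```

The same rule can also be configured from the S3 console, without any code.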
Objects that are moved to Glacier are deleted from S3, but their metadata remains there. When listing your S3 bucket objects, you will see one of the following values in the object's storage class:

  • STANDARD – 99.999999999% durability. S3’s default storage option.
  • RRS – 99.99% durability. S3’s Reduced Redundancy Storage option.
  • GLACIER – 99.999999999% durability. Objects archived in Glacier.

However, you cannot simply issue a GET request for the object as you would if it were stored in S3. You first need to RESTORE it. This takes 3 to 5 hours, after which the object is available in S3 RRS for a period you specify.
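A restore can be kicked off with boto3's RestoreObject call; the bucket and key below are placeholders for your own:

```python
def restore_request(days: int = 7) -> dict:
    """Payload for S3's RestoreObject API: how many days the restored
    copy should remain available in S3 (RRS) before being removed."""
    return {"Days": days}

def restore_from_glacier(bucket: str, key: str, days: int = 7) -> None:
    """Start a Glacier restore; the object becomes readable in roughly
    3 to 5 hours. Requires AWS credentials; bucket/key are placeholders."""
    import boto3  # local import so the sketch loads without boto3 installed
    boto3.client("s3").restore_object(
        Bucket=bucket, Key=key, RestoreRequest=restore_request(days)
    )
```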

AMI Images
An AMI contains the information for launching an EC2 instance, including user data, installed software and OS configuration. It is a template of the root volume of an instance.

AMIs are best used with EBS-backed instances rather than instance-store instances, since the API offers better support for them.
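Creating an AMI from a running EBS-backed instance is a single API call. The sketch below uses boto3; the instance id is a placeholder, and the date-stamped naming scheme is just one reasonable convention:

```python
import datetime

def ami_name(prefix: str = "kaltura-backup") -> str:
    """Date-stamped AMI name, e.g. "kaltura-backup-<YYYY-MM-DD>"."""
    return f"{prefix}-{datetime.date.today().isoformat()}"

def backup_ami(instance_id: str) -> str:
    """Create an AMI from a running EBS-backed instance. NoReboot=True
    avoids stopping the instance, at the cost of filesystem consistency.
    Requires AWS credentials; the instance id is a placeholder."""
    import boto3  # local import so the sketch loads without boto3 installed
    resp = boto3.client("ec2").create_image(
        InstanceId=instance_id, Name=ami_name(), NoReboot=True
    )
    return resp["ImageId"]
```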

EBS Snapshots

EBS, or Elastic Block Store, provides your instances with fast and reliable storage. This is typically where your application code and configuration will be found. Amazon EBS volumes are designed to be highly available and reliable. Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component. EBS volumes are generally 10 times more reliable than typical commodity disk drives.

Amazon EBS provides the ability to create point-in-time consistent snapshots of your volumes that are then stored in Amazon S3, and automatically replicated across multiple Availability Zones. So, taking frequent snapshots of your volume is a convenient and cost effective way to increase the long term durability of your data. In the unlikely event that your Amazon EBS volume does fail, all snapshots of that volume will remain intact, and will allow you to recreate your volume from the last snapshot point.
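Taking such a snapshot is again one API call with boto3; the volume id below is a placeholder, and the description format is just a suggestion:

```python
import datetime

def snapshot_description(volume_id: str) -> str:
    """Human-readable description embedding the volume id and date."""
    return f"backup of {volume_id} on {datetime.date.today().isoformat()}"

def snapshot_volume(volume_id: str) -> str:
    """Take a point-in-time snapshot of an EBS volume; AWS stores it in
    S3. For application-consistent backups, flush or freeze the
    filesystem first. Requires AWS credentials; the id is a placeholder."""
    import boto3  # local import so the sketch loads without boto3 installed
    snap = boto3.client("ec2").create_snapshot(
        VolumeId=volume_id, Description=snapshot_description(volume_id)
    )
    return snap["SnapshotId"]
```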

Amazon EBS snapshots are incremental backups, meaning that only the blocks on the device that have changed since your last snapshot will be saved. If you have a device with 100 GB of data, but only 5 GB have changed since your last snapshot, only the 5 changed GB of snapshot data will be stored back to Amazon S3. Even though the snapshots are saved incrementally, when you delete a snapshot, only the data not needed by any other snapshot is removed. So regardless of which prior snapshots have been deleted, all active snapshots contain all the information needed to restore the volume. In addition, the time to restore the volume is the same for all snapshots, offering the restore time of full backups with the space savings of incremental ones.

Snapshots can also be used to instantiate multiple new volumes, expand the size of a volume or move volumes across Availability Zones. When a new volume is created, there is the option to create it based on an existing Amazon S3 snapshot. In that scenario, the new volume begins as an exact replica of the original volume.
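Creating a volume from a snapshot (optionally larger, or in a different Availability Zone) can be sketched like this; the snapshot id and zone are placeholders:

```python
def volume_request(snapshot_id: str, az: str, size_gb=None) -> dict:
    """Parameters for EC2's CreateVolume: a new volume built from a
    snapshot, optionally grown beyond the original size."""
    params = {"SnapshotId": snapshot_id, "AvailabilityZone": az}
    if size_gb is not None:
        params["Size"] = size_gb  # must be >= the snapshot's size
    return params

def volume_from_snapshot(snapshot_id: str, az: str, size_gb=None) -> str:
    """Create the volume (requires AWS credentials; ids are placeholders)."""
    import boto3  # local import so the sketch loads without boto3 installed
    vol = boto3.client("ec2").create_volume(
        **volume_request(snapshot_id, az, size_gb)
    )
    return vol["VolumeId"]
```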

EBS snapshots can also be shared, in a manner similar to AMIs: you can grant other accounts access to them, and copy them across regions.
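Sharing with another account is done by modifying the snapshot's create-volume permission; both ids below are placeholders:

```python
def share_snapshot(snapshot_id: str, account_id: str) -> None:
    """Grant another AWS account permission to create volumes from the
    snapshot. (To move a snapshot to another region, use copy_snapshot
    instead.) Requires AWS credentials; both ids are placeholders."""
    import boto3  # local import so the sketch loads without boto3 installed
    boto3.client("ec2").modify_snapshot_attribute(
        SnapshotId=snapshot_id,
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[account_id],
    )
```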

When you snapshot an EBS volume, that snapshot is of the entire volume. Even if it was created from an AMI, your snapshot contains everything you need to create a new instance of the volume.

How a backup and restore process usually works on AWS:

Backup:

  1. Create an AMI image
  2. Create an EBS snapshot (of the volume attached to the instance)

Restore:

  1. Create an EBS-backed instance using the backup AMI
  2. Attach the EBS volume to the instance created
Other options

You can, of course, write a simple script using the AWS command-line tools or API to create snapshots or AMIs from a cron job, and delete old ones.
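The cleanup half of such a cron job might look like this boto3 sketch; the 14-day retention window and the volume id are assumptions:

```python
import datetime

RETENTION_DAYS = 14  # assumed retention window -- tune to taste

def expired(start_time, now=None, retention_days=RETENTION_DAYS):
    """True if a snapshot's StartTime falls outside the retention window."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    return (now - start_time).days > retention_days

def prune_snapshots(volume_id: str) -> None:
    """Delete this volume's snapshots that are older than the retention
    window; suitable for running from cron. Requires AWS credentials;
    the volume id is a placeholder."""
    import boto3  # local import so the sketch loads without boto3 installed
    ec2 = boto3.client("ec2")
    resp = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "volume-id", "Values": [volume_id]}],
    )
    for snap in resp["Snapshots"]:
        if expired(snap["StartTime"]):
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```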

There are also many third party solutions, which can automate the process for you.