Setting Up Kaltura CE with Amazon S3 Storage and CloudFront CDN – External Storage with CDN Delivery


The purpose of this post is to help you set up external storage in Kaltura CE 5 to work with Amazon S3 and the CloudFront CDN.

Kaltura CE 5.0 and Amazon S3 external storage

The Kaltura Community Edition (CE) 5.0 comes with built-in support for external storage on the Amazon Simple Storage Service (S3). At first the integration seemed simple and straightforward: install Kaltura CE, set up the S3 bucket, create a remote storage profile, and enable the batch job that runs the sync. However, I ran into several problems in the Kaltura code and configuration. I want to share my findings and the solution that got me to a working Kaltura CE 5.0 installation serving all content from an S3 bucket via the Amazon CloudFront CDN.

Fixing a few bugs in the Kaltura code

First things first: in order to use external storage, we must modify the batch_config.ini file. Assuming that your Kaltura installation resides in /opt/kaltura:

1. Open /opt/kaltura/app/batch/batch_config.ini (make sure your user can write to the file, or use “sudo”)

2. Find the [KAsyncStorageExport] section

3. Set “enable” to “1”

4. Add “params.useS3 = 1” (note the capital S)

5. Repeat steps 3 and 4 for the [KAsyncStorageDelete] section

6. Save the file
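After the edit, the two relevant sections should contain lines like the following (a sketch only – each section contains other keys, which vary by installation and should be left as they are):

```ini
; /opt/kaltura/app/batch/batch_config.ini (relevant entries only)
[KAsyncStorageExport]
enable = 1
params.useS3 = 1

[KAsyncStorageDelete]
enable = 1
params.useS3 = 1
```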

Great, now we have enabled the batch jobs. The next step is to fix the Kaltura database. The column that stores the S3 secret key is a varchar(31), and we need to make it bigger, otherwise the key will be truncated.

1. Run “mysql -u{MYSQL_USER} -p{MYSQL_PASS} kaltura” (substitute {MYSQL_USER} and {MYSQL_PASS} with the user and password for your Kaltura database)

2. Execute the following query: “alter table storage_profile MODIFY storage_password VARCHAR(60);”
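The statement from step 2, for easy copy-pasting (60 characters comfortably fits the 40-character AWS Secret Access Key that the original varchar(31) was truncating):

```sql
-- Widen the column that stores the S3 secret key in the Kaltura database
ALTER TABLE storage_profile MODIFY storage_password VARCHAR(60);
```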

Now the full secret key can be saved without being truncated. Great! Let’s move on to the Kaltura Admin Console.

Setting up Amazon S3 and getting security credentials

1. To get your Amazon security credentials (assuming you have an AWS account), open the Security Credentials page in the AWS Management Console

2. To set up your Amazon S3 bucket, open the S3 console, create a new bucket, and name it

3. Inside this bucket, create a folder called “kaltura”

4. Select your new bucket on the left side, click Actions and select “Properties”

5. Add more permissions – Authenticated Users – check all boxes.

6. Select the kaltura folder, click properties, go to Permissions.

7. Add more permissions – Everyone – read and download (you can also right click the folder and select “Make Public”)

Setting up Amazon CloudFront CDN

1. Open the CloudFront console

2. Create a new “Distribution” of type “Download”, and name it

3. Select your bucket as the origin, and decide whether you want logging or not.

4. Copy your CloudFront domain name (it looks like dxxxxxxxxxxxx.cloudfront.net) for later use.

Setting up the Remote Storage Profile in the Admin Console

First, you must enable the necessary configuration options for your partner:

1. Find your partner in the list of partners, click on the right drop down box and select “Configure”

2. Under “Remote Storage Policy”, set Delivery Policy to “Remote Storage Only”

3. Check the “Delete exported storage” checkbox.

4. Under Enable/Disable Features, make sure that “Remote Storage” is checked.

5. Click “Save”.

Next we must configure the Remote Storage Profile. To do this, click the partner’s left drop-down box (under “Profiles”) and select “Remote Storage”. You should see the “Remote Storage Profiles” page for your publisher (if you haven’t set up any remote storage profiles yet, the list will be empty).

(Assuming that you have already set up an S3 bucket, and that you have an Access Key ID and a Secret Access Key)

1. Create a new profile by entering your publisher ID in the right “Publisher ID” input box and clicking “Create New”.

2. Give a name to your Remote Storage (for example “Amazon S3”)

3. For “Storage URL” type http://{yourbucketname}.s3.amazonaws.com (replace {yourbucketname} with your bucket name on S3)

4. In Storage Base Directory, write “/{yourbucketname}/kaltura” (note the leading slash, and replace {yourbucketname} with your bucket name)

5. Storage Username – enter your AWS Access Key ID

6. Storage Password – paste your AWS Secret Access Key

7. Under HTTP Delivery Base URL, type “http://{your amazon cloudfront domain}/kaltura” (replace {your amazon cloudfront domain} with the CloudFront domain you copied in the previous section).

8. Save the new Remote Storage Profile

Add a crossdomain.xml file

Create a crossdomain.xml file in the root of your S3 bucket:

<cross-domain-policy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.adobe.com/xml/schemas/PolicyFile.xsd">
    <allow-access-from domain="*" to-ports="*" secure="false"/>
    <site-control permitted-cross-domain-policies="all"/>
    <allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>
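One way to get the file into place is to write it locally and push it to the bucket root with an S3 client. The s3cmd client in the comment below is just one option (any S3 upload tool works), and {yourbucketname} is a placeholder for your bucket name:

```shell
# Write the cross-domain policy file locally
cat > crossdomain.xml <<'EOF'
<cross-domain-policy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://www.adobe.com/xml/schemas/PolicyFile.xsd">
    <allow-access-from domain="*" to-ports="*" secure="false"/>
    <site-control permitted-cross-domain-policies="all"/>
    <allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>
EOF

# Then upload it to the bucket root with public read access, e.g. with s3cmd:
#   s3cmd put --acl-public crossdomain.xml s3://{yourbucketname}/crossdomain.xml
```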

Final Step – Enable the remote storage profile

1. Click on the dropdown box next to your new storage profile in the Remote Storage Profiles page in Kaltura Admin Console

2. Select “Export Automatically” and then click “OK”

3. You will receive confirmation that your storage profile is now set to export automatically 🙂

Test your new configuration

You can now go ahead and test your new configuration. Upload a new video in the KMC, let it convert, and wait for it to be distributed. Then play the entry and analyse the traffic in your favorite sniffer: you should see the video files (look for FLV and MP4) being downloaded from your CloudFront CDN.
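As an illustrative sanity check (the file name and URLs below are made-up examples, not output from a real installation), you can dump the request URLs captured by your sniffer to a file and grep for media files served from cloudfront.net:

```shell
# Hypothetical capture of request URLs, one per line
cat > requests.txt <<'EOF'
http://d1234abcd.cloudfront.net/kaltura/content/entry/0_xyz.mp4
http://my-kaltura-server/api_v3/index.php?service=session&action=start
EOF

# Count media requests (flv/mp4) that went to CloudFront
grep -E '\.(flv|mp4)' requests.txt | grep -c 'cloudfront\.net'   # prints 1
```

If the count matches the number of media files your player requested, delivery is coming from the CDN rather than from your Kaltura server.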

Good Luck

Leon Gordin

Co-Founder, PandaOS