The LoopBack storage component enables you to upload and download files to cloud storage providers and the local (server) file system. It has Node.js and REST APIs.

Overview

The LoopBack storage component makes it easy to upload and download files to cloud storage providers and the local (server) file system.  It has Node.js and REST APIs for managing binary content in cloud providers, including:

  • Amazon
  • Azure
  • Google Cloud
  • OpenStack
  • Rackspace

You use the storage component like any other LoopBack data source such as a database. Like other data sources, it supports create, read, update, and delete (CRUD) operations with exactly the same LoopBack and REST APIs.

Installation

Install the storage component as usual for a Node package:

$ npm install loopback-component-storage

Example

For an example of using the storage component, see https://github.com/strongloop/loopback-example-storage

Follow these steps to run the LoopBack 2.x example:

$ git clone https://github.com/strongloop/loopback-example-storage.git
$ cd loopback-example-storage/example-2.0
$ npm install
$ node .

Then load http://localhost:3000 in your browser.

Containers and files

The storage component organizes content as containers and files. A container holds a collection of files, and each file belongs to one container.

  • Container: groups files, similar to a directory or folder. A container defines the namespace for objects and is uniquely identified by its name, typically within a user account. NOTE: A container cannot have child containers.
  • File: stores the data, such as a document or image. A file always belongs to one (and only one) container. Within a container, each file has a unique name; files in different containers can have the same name. By default, a file uploaded with the same name as an existing file overwrites it (see the example below).
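
For example, two files named photo.jpg can coexist in different containers; with the REST API described later in this page, they are addressed by different URIs (the container names album1 and album2 are just examples):

GET /api/containers/album1/files/photo.jpg
GET /api/containers/album2/files/photo.jpg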

Creating a storage component data source

You can create a storage component data source either using the command-line tools and the /server/datasources.json file or programmatically in JavaScript.

Using CLI and JSON

Create a new data source as follows:

$ lb datasource
[?] Enter the data-source name: myfile
[?] Select the connector for myfile: other
[?] Enter the connector name without the loopback-connector- prefix: loopback-component-storage
[?] Install storage (Y/n)

With the IBM API Connect v5 developer toolkit, use this command instead:

$ apic create --type datasource
...

Then edit /server/datasources.json and manually add the properties of the data source (other than “name” and “connector”).

For example:

"myfile": {
  "name": "myfile",
  "connector": "loopback-component-storage",
  "provider": "amazon",
  "key": "your amazon key",
  "keyId": "your amazon key id"
}

Using JavaScript

You can also create a storage component data source programmatically with the loopback.createDataSource() method, putting code in /server/server.js.  For example, using local file system storage:

server/server.js

var ds = loopback.createDataSource({
    connector: require('loopback-component-storage'),
    provider: 'filesystem',
    root: path.join(__dirname, 'storage')
});

var container = ds.createModel('container');

Here’s another example, this time for Amazon:

server/server.js

var ds = loopback.createDataSource({
  connector: require('loopback-component-storage'),
  provider: 'amazon',
  key: 'your amazon key',
  keyId: 'your amazon key id'
});
var container = ds.createModel('container');
app.model(container);

You can also put this code in the /server/boot directory, as an exported function:

module.exports = function(app) { 
  // code to set up data source as shown above 
};
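
For example, a boot script along these lines (the file name create-storage.js is arbitrary) creates a local file-system data source and attaches a container model to the app; this is a minimal sketch, not the only way to structure it:

server/boot/create-storage.js

var loopback = require('loopback');
var path = require('path');

module.exports = function(app) {
  // Store files under server/storage; adjust the root path as needed.
  var ds = loopback.createDataSource({
    connector: require('loopback-component-storage'),
    provider: 'filesystem',
    root: path.join(__dirname, '../storage')
  });

  var container = ds.createModel('container');
  app.model(container);
};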

Provider credentials

Each cloud storage provider requires different credentials to authenticate. Provide these credentials as properties of the JSON object argument to createDataSource(), in addition to the connector property, as listed below for each provider.

Amazon (provider: 'amazon')

  • key: Amazon key
  • keyId: Amazon key ID

{
  provider: 'amazon',
  key: '...',
  keyId: '...'
}

Rackspace (provider: 'rackspace')

  • username: Your username
  • apiKey: Your API key

{
  provider: 'rackspace',
  username: '...',
  apiKey: '...'
}

Azure (provider: 'azure')

  • storageAccount: Name of your storage account
  • storageAccessKey: Access key for the storage account

{
  provider: 'azure',
  storageAccount: '...',
  storageAccessKey: '...'
}

OpenStack (provider: 'openstack')

  • username: Your username
  • password: Your password
  • authUrl: URL of your identity service

{
  provider: 'openstack',
  username: '...',
  password: '...',
  authUrl: 'https://your-identity-service'
}

Google Cloud (provider: 'google')

  • keyFilename: Path to the key file
  • projectId: Google Cloud project ID

{
  provider: 'google',
  keyFilename: 'path/to/keyfile.json',
  projectId: '...',
  nameConflict: 'makeUnique'
}

Local file system (provider: 'filesystem')

  • root: File-system path to the storage root directory

{
  provider: 'filesystem',
  root: '/tmp/storage',
  nameConflict: 'makeUnique'
}

Automatic Unique Filenames

As documented above, a file uploaded with the same name as an existing file overwrites it. If you add the configuration property nameConflict with the value makeUnique, uploaded files are automatically renamed to a UUID plus the file extension of the original file name. You will most likely want to enable this option; otherwise, the code calling the API must ensure that file names are unique.
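
For example, for a local file-system data source in /server/datasources.json (the data source name myfile is just an example):

"myfile": {
  "name": "myfile",
  "connector": "loopback-component-storage",
  "provider": "filesystem",
  "root": "/tmp/storage",
  "nameConflict": "makeUnique"
}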

Renaming files

You can rename files during upload, before they reach their destination (file system or cloud), by setting the getFilename function in the data source options. You can do this in a model file, in a boot script, or anywhere else you can access the data source object.

getFilename has a signature of getFilename(uploadingFile, req, res), where uploadingFile is an object containing the details of the uploading file, req is the request object, and res is the response object.

The string returned by getFilename will become the new name of the file.

Below are two examples of setting the getFilename option for a data source named files.

  1. Renaming using a model file (User is the model):
module.exports = function(User) {
  User.getApp(function (err, app) {
    if (err) return err;
    // Rename every uploaded file to a random digit string with a .jpg extension.
    app.dataSources.files.connector.getFilename = function(uploadingFile, req, res) {
      return Math.random().toString().substr(2) + '.jpg';
    };
  });
};
  2. Renaming using a boot script (./server/boot/storage-config.js):
module.exports = function(app) {
  app.dataSources.files.connector.getFilename = function(uploadingFile, req, res) {
    return Math.random().toString().substr(2) + '.jpg';
  };
};

API

Once you create a container model, it provides both REST and Node APIs, as described below. For details, see the complete API documentation.

  • getContainers(cb): List all containers.
    REST: GET /api/containers
  • getContainer(container, cb): Get information about the specified container.
    REST: GET /api/containers/:container
  • createContainer(options, cb): Create a new container.
    REST: POST /api/containers
  • destroyContainer(container, cb): Delete the specified container.
    REST: DELETE /api/containers/:container
  • getFiles(container, download, cb): List all files in the specified container.
    REST: GET /api/containers/:container/files
  • getFile(container, file, cb): Get information about the specified file in the specified container.
    REST: GET /api/containers/:container/files/:file
  • removeFile(container, file, cb): Delete the named file from the specified container.
    REST: DELETE /api/containers/:container/files/:file
  • upload(container, req, res, cb): Upload one or more files into the specified container. The request body must use multipart/form-data, the encoding used by HTML file uploads.
    REST: POST /api/containers/:container/upload
  • download(container, file, req, res, cb): Download a file from the specified container.
    REST: GET /api/containers/:container/download/:file
  • uploadStream(container, file, options, cb): Get a stream for uploading.
  • downloadStream(container, file, options, cb): Get a stream for downloading.
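
For example, here is a minimal sketch of the Node API, assuming the container model created in the earlier examples is attached to the app (the container name album1 is arbitrary):

// Create a container named 'album1', then list all containers.
container.createContainer({ name: 'album1' }, function(err, c) {
  if (err) throw err;
  console.log('Created container:', c.name);

  container.getContainers(function(err, containers) {
    if (err) throw err;
    containers.forEach(function(item) {
      console.log('Found container:', item.name);
    });
  });
});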