LXD Driver
Even before LXD gained its new, more powerful storage API that allows it to administer multiple storage pools, one frequent request was to extend the range of available storage drivers (btrfs, dir, lvm, zfs) to include Ceph. We are happy to announce that we have fulfilled this request: as of release 2.16, LXD comes with a Ceph storage driver.
The command line experience for Ceph is similar to the other storage drivers; anyone who has played with the storage API should feel at home right away. Without going into too much detail about the inner workings of Ceph, there are a few things to keep in mind. LXD is not concerned with administering the Ceph cluster itself. Instead, LXD can be used to create and administer OSD storage pools in an existing Ceph cluster. The OSD storage pool is then used by LXD to create RBD storage volumes for images, containers, and snapshots, just as with any other storage driver.
Creating OSD storage pools in Ceph clusters
Like any other storage driver, the Ceph storage driver is supported through lxd init. So creating a Ceph storage pool becomes as easy as this:
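The relevant part of the dialogue looks roughly like the following (this is only a sketch; the exact prompts vary between LXD versions, and the pool name is a placeholder):

    $ lxd init
    Do you want to configure a new storage pool (yes/no) [default=yes]? yes
    Name of the new storage pool [default=default]: my-osd-pool
    Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]: ceph
    ...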
For more advanced use cases it's possible to use our lxc storage command line tool to create further OSD storage pools in a Ceph cluster. Users have the ability to fine-tune several parameters when doing so. For example, it is possible to specify the Ceph user via ceph.user.name and the cluster to use via ceph.cluster_name. So say you wanted to create a new OSD storage pool in the cluster my-cluster for a user called my-user. This can be done by passing both properties to lxc storage create:
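A sketch of that invocation, with my-osd as a placeholder pool name:

    lxc storage create my-osd ceph \
        ceph.cluster_name=my-cluster \
        ceph.user.name=my-user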
In the following example I'm going to use the default admin for ceph.user.name and ceph for ceph.cluster_name, just to illustrate the use of these properties when creating a new OSD storage pool. I will also make use of the ceph.osd.pool_name property. This is useful to tell LXD that the internal name it uses to represent the OSD storage pool is supposed to be different from the name of the OSD storage pool on disk. That is typically the case either when an OSD storage pool with the desired on-disk name already exists in the cluster and you want LXD to use it, or when the on-disk name you would like to use is already taken by another LXD storage pool. The final property I'm going to specify is ceph.osd.pg_num, which sets the number of placement groups the new OSD storage pool will use:
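Put together, such a command could look roughly like this (the pool names and the pg_num value of 32 are just example values):

    lxc storage create my-osd-pool ceph \
        ceph.user.name=admin \
        ceph.cluster_name=ceph \
        ceph.osd.pool_name=my-osd-pool-on-disk \
        ceph.osd.pg_num=32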
Creating images, containers, and snapshots on an OSD storage pool
Now that we have created two OSD storage pools we are ready to create containers in them. Let’s see if it all goes well.
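For example, launching a container on one of the new pools and snapshotting it might look like this (the image alias and the names are placeholders):

    lxc launch ubuntu:16.04 ceph-container -s my-osd-pool
    lxc snapshot ceph-container snap0
    lxc list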
OSD storage pools use the RBD kernel driver to create and administer storage volumes. RBD storage volumes are conceptually similar to LVM logical volumes and ZFS datasets, and they share some properties with both. Similar to logical volumes, RBD storage volumes are block devices. This means the user can determine which filesystem to use for the storage volumes that are created. By default, LXD will use ext4 for all new storage volumes, but it is possible to tell LXD to use xfs instead. Let's create a new storage pool that uses xfs as its default filesystem for all new storage volumes:
But as I said, RBD also shares features that make it similar to ZFS. For example, RBD supports the concept of clones. Clones are space-efficient storage volumes based on protected snapshots of other storage volumes. Internally this leads to a more complex storage pool structure, but LXD is smart enough to figure out the right dependencies and keeps track of any storage volumes that need to be kept around even after the container has been deleted. The good news is that these clones are not just space-efficient, they are also super fast. Let's try to copy an already existing container. LXD will use RBD clones for that:
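With placeholder container names, that looks something like:

    lxc copy ceph-container ceph-container-copy
    lxc start ceph-container-copy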
Summary
By adding the Ceph storage driver to the storage API, LXD gains support for distributed storage. This makes LXD even more suitable for use in critical production environments and for running containers at very large scale. Administration is easy and intuitive through our storage API. I hope that this short introduction has given you a good impression of what the Ceph storage driver is currently capable of. We have more documentation available in our GitHub repository and are always open to feature requests and happy to lend support. The Ceph storage driver was fun to implement. I hope you have as much fun using it as I had writing it.
Take care
Christian
Last updated: August 17th 2020
At long last we have gotten full Docker compatibility in our LXD containers after much kernel tweaking and upstream fixes.
The steps required to get Docker running in our VPSs are as follows:
1. Provision a Webdock Ubuntu (latest available version) base image, create a sudo shell user and connect via SSH
- You should not create a Micro container as there will be too little disk space available for Docker to function. Go for Micro+ or larger.
2. Follow the official instructions on how to install Docker-CE
3. Test that Docker works :)
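A quick smoke test could look like this (assuming Docker CE was installed per the official instructions above):

    sudo docker run --rm hello-world
    sudo docker info | grep -i "storage driver"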
If you get an error of some kind, try running docker info and make sure that the Storage Driver listed is vfs - it really should be - but if it isn't, try the following steps:
1. 'Hack' the containerd service to prevent it from looking for the overlay fs driver
- Run systemctl edit containerd.service. This automatically creates a draft override file and opens it in your editor.
- Edit the file to look like the sketch shown after these steps.
- Save the file
- Now run the restart command shown in the sketch below.
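Here is a rough sketch of both pieces, assuming the stock Ubuntu containerd unit (which tries to modprobe the overlay module via ExecStartPre); clearing ExecStartPre in the override prevents that probe:

    # Override file created by `systemctl edit containerd.service`
    # (typically /etc/systemd/system/containerd.service.d/override.conf):
    [Service]
    ExecStartPre=

    # Then restart containerd so the override takes effect:
    sudo systemctl restart containerd.service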
2. Configure Docker to use VFS as the filesystem driver
- Edit /etc/docker/daemon.json. If it does not yet exist, create it. Assuming that the file was empty, add the following contents.
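A minimal sketch of what the file might contain (storage-driver is a standard Docker daemon configuration key):

    {
        "storage-driver": "vfs"
    }

After saving, restart the Docker service (for example with sudo systemctl restart docker) so the setting takes effect.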
If you are using docker-compose, you may need to tell it where the Docker daemon lives:
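One common way is the DOCKER_HOST environment variable, assuming the daemon listens on the default Unix socket:

    export DOCKER_HOST=unix:///var/run/docker.sock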
Webdock now fully supports nested LXD containers. LXD is similar in functionality to Docker, and is a great alternative.
To create an LXD container in your Webdock server, simply initialize LXD and accept all the defaults (comes pre-installed on all our Ubuntu systems) and off you go. The setup process would look something like the following. Please note the security.nesting=true in the launch / init command parameters:
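Something along these lines (the image alias and container name are placeholders; adjust to taste):

    lxd init                                                   # accept the defaults
    lxc launch ubuntu:20.04 nested -c security.nesting=true    # note security.nesting=true
    lxc exec nested -- bash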