How To Install and Use Docker: Getting Started
Whether you are moving from your development machine to a remote server for production, or packaging everything for use elsewhere, porting your application stack together with its dependencies and getting it to run without hiccups is always a challenge. In fact, the challenge is immense, and solutions so far have not really proved successful for the masses.

In a nutshell, docker as a project offers you a complete set of higher-level tools to carry everything that forms an application across systems and machines, virtual or physical, and brings many more benefits with it. Docker achieves its robust application (and therefore, process and resource) containment via Linux Containers (e.g. namespaces and other kernel features). Its further capabilities come from the project's own parts and components, which abstract away the complexity of working with the lower-level Linux tools/APIs used for system and application management with regard to securely containing processes.

The Docker Project and its Main Parts

The docker project (open-sourced by dotCloud in March '13) consists of several main parts (applications) and elements (used by these parts), which are all [mostly] built on top of existing functionality, libraries and frameworks offered by the Linux kernel and third parties (e.g. LXC, device-mapper, aufs etc.).

Main Docker Parts

The main parts are the docker daemon (which runs on the host and manages the containers), the docker CLI (used to command and communicate with the daemon), and the docker image index (a public or private repository for docker images); all three come up in the sections below.
Main Docker Elements

The main elements used by these applications are docker containers, docker images, and Dockerfiles, each described in the next section.
Docker Elements

The following elements are used by the applications forming the docker project.

Docker Containers

The entire procedure of porting applications using docker relies solely on the shipment of containers. Docker containers are basically directories which can be packed (e.g. tar-archived) like any other, then shared and run across various different machines and platforms (hosts). The only dependency is having the hosts tuned to run the containers (i.e. having docker installed). Containment here is obtained via Linux Containers (LXC).

LXC (Linux Containers)

Linux Containers can be defined as a combination of various kernel-level features (i.e. things the Linux kernel can do) which allow management of applications (and the resources they use) contained within their own environment. By making use of certain features (e.g. namespaces, chroots, cgroups and SELinux profiles), LXC contains application processes and helps with their management by limiting resources, not allowing them to reach beyond their own file system (i.e. access the parent's namespace), etc. Docker makes use of LXC for its containers; however, it also brings along much more.

Docker Containers

Docker containers have several main features. They allow applications (and their processes) to run isolated from one another and from the host, to carry their own file system and dependencies with them, to be ported easily between hosts, and to have their resource consumption limited, and more, all while requiring far fewer resources than the traditional virtual machines used for isolated application deployments. They do not allow processes to interfere with other processes or with the host, to reach beyond their own environment, or to drag the host into dependency conflicts, and (also) more.
Being based on and depending upon LXC, from a technical aspect these containers are like a directory (but a shaped and formatted one). This allows portability and gradual builds of containers. Each container is layered like an onion, and each action taken within a container consists of putting another block (which actually translates to a simple change within the file system) on top of the previous one. And various tools and configurations make this set-up work in a harmonious way (e.g. the union file system). What this way of having containers allows is the extreme benefit of easily launching and creating new containers and images, which are thus kept lightweight (thanks to the gradual and layered way they are built). Since everything is based on the file system, taking snapshots and performing roll-backs in time is cheap (i.e. very easily done, not heavy on resources), much like in version control systems (VCS). Each docker container starts from a docker image, which forms the base for other applications and layers to come.

Docker Images

Docker images constitute the base of docker containers from which everything starts to form. They are very similar to the default operating-system disk images which are used to run applications on servers or desktop computers. Having these images (e.g. an Ubuntu base) allows seamless portability across systems. They make a solid, consistent and dependable base with everything that is needed to run the applications. When everything is self-contained and the risk of system-level updates or modifications is eliminated, the container becomes immune to external exposures which could put it out of order, preventing dependency hell. As more layers (tools, applications etc.) are added on top of the base, new images can be formed by committing these changes. When a new container gets created from a saved (i.e. committed) image, things continue from where they left off. And the union file system brings all the layers together as a single entity when you work with a container. These base images can be explicitly stated when working with the docker CLI to directly create a new container, or they might be specified inside a Dockerfile for automated image building.

Dockerfiles

Dockerfiles are scripts containing a successive series of instructions, directions, and commands which are to be executed to form a new docker image. Each command executed translates to a new layer of the onion, forming the end product. They basically replace the process of doing everything manually and repeatedly. When a Dockerfile is finished executing, you end up with an image, which you then use to start (i.e. create) a new container.

How To Install Docker

At first, docker was only available on Ubuntu. Nowadays, with its most recent release (0.7.1, dated 5 December 2013), it is possible to deploy docker on RHEL-based systems (e.g. CentOS) and others as well.

Installation Instructions for Ubuntu

The simplest way to get docker, other than using the pre-built application image, is to go with a 64-bit Ubuntu 13.04 server.

Update your system:
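    sudo apt-get update          # aptitude update works just as well
    sudo apt-get -y upgrade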
Make sure aufs support is available:
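    sudo apt-get install linux-image-extra-$(uname -r)    # provides the aufs module; assumes the stock Ubuntu kernel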
Add the docker repository key to apt-key for package verification:
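The key was distributed from get.docker.io at the time; the URL below reflects that era and is given as a representative example:

    sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"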
Add the docker repository to aptitude sources:
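Again assuming the get.docker.io package repository of that era:

    sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"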
Update the package index with the new addition:
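    sudo apt-get update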
Finally, download and install docker:
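The docker package was named lxc-docker in that repository at the time:

    sudo apt-get install lxc-docker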
Ubuntu's default firewall (UFW: Uncomplicated Firewall) denies all forwarding traffic by default, but docker needs forwarding to be enabled. Enable forwarding with UFW by editing the UFW configuration using the nano text editor:
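    sudo nano /etc/default/ufw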
Scroll down and find the line beginning with DEFAULT_FORWARD_POLICY. Replace:
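    DEFAULT_FORWARD_POLICY="DROP"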
With:
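    DEFAULT_FORWARD_POLICY="ACCEPT"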
Press CTRL+X and approve with Y to save and close. Finally, reload UFW:
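    sudo ufw reload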
How To Use Docker

Once you have docker installed, its intuitive usage makes it very easy to work with. By now, you should have the docker daemon running in the background. If not, run it with the following command.

To run the docker daemon:
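    sudo docker -d &    # with the 0.7-era CLI, -d starts the daemon; & runs it in the background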
Usage Syntax: Using docker (via the CLI) consists of passing it a chain of options and commands followed by arguments. Please note that docker needs sudo privileges in order to work.
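The general form looks like this:

    sudo docker [option] [command] [arguments]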
Note: The instructions and explanations below are provided as a guide, to give you an overall idea of using and working with docker. The best way to get familiar with it is to practice on a new server. Do not be afraid of breaking anything; in fact, do break things! With docker, you can save your progress and continue from there very easily.

Beginning

Let's begin by seeing all the available commands docker has. Ask docker for a list of all available commands:
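    sudo docker

Running docker without any arguments prints the usage summary together with the list of available commands.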
Check out system-wide information and docker version:
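    sudo docker info
    sudo docker version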
Working with Images

As we have discussed at length, the key to starting work with any docker container is images. There are many freely available images shared across the docker image index, and the CLI allows simple access to query the image repository and to download new ones.
Searching for a docker image:
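For example, to query the index for Ubuntu images:

    sudo docker search ubuntu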
This will provide you with a very long list of all available images matching the query: Ubuntu.

Downloading (PULLing) an image: Either when you are building / creating a container or before you do, you will need to have an image present on the host machine where the containers will exist. In order to download images (perhaps following "search"), you can execute "pull" to get one.
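For example, to pull the base Ubuntu image:

    sudo docker pull ubuntu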
Listing images: All the images on your system, including the ones you have created by committing (see below for details), can be listed using "images". This provides a full list of all available ones.
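    sudo docker images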
Committing changes to an image: As you work with a container and continue to perform actions on it (e.g. download and install software, configure files etc.), to have it keep its state, you need to "commit". Committing makes sure that everything continues from where it left off the next time you use one (i.e. an image).
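The general form, with bracketed placeholders standing in for your own container ID and image name:

    sudo docker commit [container ID] [image name]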
Sharing (PUSHing) images: Although it is a bit early at this point in our article, when you have created your own container which you would like to share with the rest of the world, you can use "push" to have your image listed in the index, where everybody can download and use it.
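The general form (the bracketed placeholder stands for your own username and image name on the index):

    sudo docker push [username/image name]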
Note: You need to sign up at index.docker.io to push images to the docker index.

Working with Containers

When you "run" any process using an image, in return, you will have a container. When the process is not actively running, this container will be a non-running container. Nonetheless, all of them will reside on your system until you remove them via the rm command.

Listing all current containers: By default, you can use the following to list all running containers:
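    sudo docker ps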
To have a list of both running and non-running ones, use:
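    sudo docker ps -a    # -a lists stopped containers as well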
Creating a New Container

It is currently not possible to create a container without running anything (i.e. commands). To create a new container, you need to use a base image and specify a command to run.
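The general form, followed by a concrete example (the "ubuntu" image and the echo command are purely illustrative):

    sudo docker run [image name] [command to run]
    sudo docker run ubuntu /bin/echo 'hello world'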
Running a container: When you create a container and it stops (either due to its process ending or you stopping it explicitly), you can use "start" with the container's ID to get it working again with the same command used to create it; "run", by contrast, always creates a new container from an image.
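The general form, with the bracketed placeholder standing in for the ID of the stopped container:

    sudo docker start [container ID]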
Stopping a container: To stop a container's process from running:
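    sudo docker stop [container ID]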
Saving (committing) a container: If you would like to save the progress and changes you made with a container, you can use "commit" as explained above to save it as an image.
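As with images, the bracketed placeholders stand in for your own values:

    sudo docker commit [container ID] [image name]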
Remember that with docker, commits are cheap. Do not hesitate to use them to create images to save your progress with a container.

Removing / Deleting a container: Using the ID of a container, you can delete one with rm.
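    sudo docker rm [container ID]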