LXC vs LXD vs Docker
- LXC, LXD, Docker http://unix.stackexchange.com/questions/254956/what-is-the-difference-between-docker-lxd-and-lxc
- LXC https://linuxcontainers.org/
- LXC https://wiki.archlinux.org/index.php/Linux_Containers
- http://www.docker.com/
- LXD http://blog.scottlowe.org/2015/05/06/quick-intro-lxd/
- OpenStack vs KVM vs Docker https://www.mirantis.com/blog/need-openstack-use-docker/
Linux Container technologies:
- LXC
- OpenVZ
- Linux-VServer
- FreeBSD jail
- Solaris Zones
- Docker specializes in deploying apps (it encapsulates an app and its dependencies)
- LXD specializes in deploying (Linux) virtual machines (its containers behave like full Linux VMs)
According to Red Hat, containers using the libvirt-lxc tooling have been deprecated as of RHEL 7.1. The Linux containers framework is now based on the docker command-line interface.
Linux containers are essentially the clone() system call, SELinux, and cgroups; whether they are of an LXC or Docker (through libcontainer) type is mostly irrelevant, as the Linux kernel itself "does" the process isolation.
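As a quick illustration that the kernel itself provides the isolation regardless of tooling, the same primitives can be exercised directly with util-linux's unshare (a minimal sketch, run as root):
# unshare --fork --pid --mount-proc /bin/bash
# ps -ef    # inside the new namespace only bash and ps are visible, with bash as PID 1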
--------------------
Docker Howto:
- Docker Cheat Sheet
- https://docs.docker.com/engine/userguide/containers/dockerizing/
- https://realpython.com/blog/python/django-development-with-docker-compose-and-machine/
- http://rhelblog.redhat.com/2016/03/16/container-tidbits-when-should-i-break-my-application-into-multiple-containers/#more-1632
- http://www.tecmint.com/install-docker-and-learn-containers-in-centos-rhel-7-6/
Tutorials:
- How to Install and Use Docker Getting Started
- How To Containerize and Use Nginx as a Proxy
- How To Serve Django Applications with uWSGI and Nginx on Ubuntu 16.04
- Howto create a Docker Image for RHEL
- Deploying Python Applications with Docker
- A clean way to locally install python dependencies with pip in Docker
use "pip --user" not "pip --target" to install package for upgradability
Docker Commands
Here is a summary of currently available (version 0.7.1) docker commands:
attach: Attach to a running container
build: Build an image from a Dockerfile
commit: Create a new image from a container's changes
cp: Copy files/folders from the container's filesystem to the host path
diff: Inspect changes on a container's filesystem
events: Get real time events from the server
export: Stream the contents of a container as a tar archive
history: Show the history of an image
images: List images
import: Create a new filesystem image from the contents of a tarball
info: Display system-wide information
insert: Insert a file in an image
inspect: Return low-level information on a container
kill: Kill a running container
load: Load an image from a tar archive
login: Register or log in to the Docker registry server
logs: Fetch the logs of a container
port: Look up the public-facing port which is NAT-ed to PRIVATE_PORT
ps: List containers
pull: Pull an image or a repository from the docker registry server
push: Push an image or a repository to the docker registry server
restart: Restart a running container
rm: Remove one or more containers
rmi: Remove one or more images
run: Run a command in a new container
save: Save an image to a tar archive
search: Search for an image in the docker index
start: Start a stopped container
stop: Stop a running container
tag: Tag an image into a repository
top: Look up the running processes of a container
version: Show the docker version information
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
-a, --attach | attach to STDIN, STDOUT or STDERR |
-d | run container in background |
-h | container hostname |
-i, --interactive | keep STDIN open even if not attached |
--ip="" | container ipv4 address |
--link=[] | add link to another container |
--name="" | Assign a name to the container |
-P, --publish-all | Publish all exposed ports to random ports |
-p, --publish=[] | publish a container's ports to the host |
--rm | Automatically remove the container when it exits |
-u, --user="<name|uid>[:<group|gid>]" | username |
-v, --volume=[host-src:]container-dest[:<options>] | Bind mount a volume. The comma-delimited `options` are [rw|ro], [z|Z], [[r]shared|[r]slave|[r]private], and [nocopy]. The 'host-src' is an absolute path or a name value. |
--volumes-from=[] | Mount volumes from the specified container(s) |
-w, --workdir="" | Working directory inside the container |
The -p flag can take a few different formats: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Essentially, you can omit either ip or hostPort, but you must always specify a containerPort to expose. Docker will automatically provide an ip and hostPort if they are omitted. Additionally, all of these publishing rules default to tcp. If you need udp, simply tack it on to the end, such as -p 1234:1234/udp.
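For example (host addresses and ports below are only illustrative):
# docker run -d -p 127.0.0.1:8080:80 nginx    # ip:hostPort:containerPort
# docker run -d -p 127.0.0.1::80 nginx        # ip::containerPort, random host port
# docker run -d -p 8080:80 nginx              # hostPort:containerPort
# docker run -d -p 80 nginx                   # containerPort only, random host port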
============
Docker images are stored under the /var/lib/docker directory; the exact layout varies depending on the driver Docker is using for storage. On most distributions this will be aufs, but Red Hat went with devicemapper. You can manually set the storage driver with the -s or --storage-driver= option to the Docker daemon.
/var/lib/docker/{driver-name} contains the driver-specific storage for the contents of the images.
/var/lib/docker/graph/<id> now only contains metadata about the image, in the json and layersize files.
In the case of aufs:
/var/lib/docker/aufs/diff/<id> has the file contents of the images.
/var/lib/docker/repositories-aufs is a JSON file containing local image information. This can be viewed with the command docker images.
In the case of devicemapper:
/var/lib/docker/devicemapper/devicemapper/data stores the images.
/var/lib/docker/devicemapper/devicemapper/metadata stores the metadata.
Note these files are thin-provisioned "sparse" files, so they aren't as big as they seem.
The images are stored in /var/lib/docker/graph/<id>/layer.
Note that images are just diffs from the parent image. The parent ID is stored with the image's metadata in /var/lib/docker/graph/<id>/json.
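The layer chain and the raw metadata can be inspected directly (a sketch; <id> is whatever image ID you are looking at):
# docker history centos:centos7                         # list the image's layers
# python -m json.tool /var/lib/docker/graph/<id>/json   # raw metadata, including the parent ID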
When you docker run an image, AUFS will 'merge' all layers into one usable file system.
-------
* Increase the storage disk for one container, which defaults to 10G
* Increase the total data space used by docker on your platform, which defaults to (type 'docker info'): Data Space Total: 107.4 GB
# docker info
...
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Total: 107.4 GB
...
Data loop file: /home/docker/devicemapper/devicemapper/data
Metadata loop file: /home/docker/devicemapper/devicemapper/metadata
Modify docker config file "/etc/sysconfig/docker" option:
OPTIONS='--selinux-enabled=false --storage-opt dm.no_warn_on_loop_devices=true --storage-opt dm.basesize=400G -g /home/docker'
Create 400G data file:
# dd if=/dev/zero of=/home/docker/devicemapper/devicemapper/data bs=1G count=0 seek=400
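To confirm the file really is sparse and to pick up the new size (same paths as above; a sketch, not a verified recipe for every setup):
# ls -lh /home/docker/devicemapper/devicemapper/data   # apparent size ~400G
# du -sh /home/docker/devicemapper/devicemapper/data   # actual blocks used, far smaller
# systemctl restart docker && docker info | grep 'Data Space Total'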
Docker's default IP is 172.17.42.1/16, assigned to the virtual interface docker0. But docker0 is no ordinary interface. It is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it. This lets containers communicate both with the host machine and with each other. Every time Docker creates a container, it creates a pair of "peer" interfaces that are like opposite ends of a pipe: a packet sent on one will be received on the other. It gives one of the peers to the container to become its eth0 interface and keeps the other peer, with a unique name like vethAQI2QT, out in the namespace of the host machine. By binding every veth* interface to the docker0 bridge, Docker creates a virtual subnet shared between the host machine and every Docker container.
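The bridge and its attached veth peers can be inspected from the host (brctl comes from the bridge-utils package; a quick sketch):
# ip addr show docker0    # bridge address, e.g. 172.17.42.1/16
# brctl show docker0      # lists the veth* interfaces currently attached to the bridge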
Docker is based on so-called images. These images are comparable to virtual machine images and contain files, configurations and installed programs. And just like virtual machine images, you can start instances of them. A running instance of an image is called a container. You can make changes to a container (e.g. delete a file), but these changes will not affect the image. However, you can create a new image from a running container (and all its changes) using docker commit <container-id> <new-image-name>.
Export is used to persist a container, and save is used to persist an image. export will give you a flat .tar archive containing your container's filesystem; all the metadata will be lost, so if you run a container from the re-imported image you have to re-specify the CMD and other metadata. (Export/import creates a new image with a new creation time, whereas save/load preserves the creation time of the original image.)
docker save and docker load preserve image metadata (CMD, ENTRYPOINT, etc.) and all layers. docker export and docker import don't preserve metadata. This is by design and it's not being changed. docker export does not export everything about the container, just the filesystem. So, when importing the dump back into a new docker image, additional flags need to be specified to recreate the context.
docker export - saves a container’s running or paused instance to a file
docker save - saves a non-running container image to a file
# sudo docker export <CONTAINER ID> > /home/export.tar
# sudo docker save <image> > /home/save.tar
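A round trip of the two approaches might look like this (image names and tags are only illustrative):
# docker import /home/export.tar myapp:imported    # flat filesystem, CMD/ENTRYPOINT are lost
# docker run -it myapp:imported /bin/bash          # the command must now be given explicitly
# docker load < /home/save.tar                     # restores the image with all layers and metadata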
In RHEL 7, create a static route file:
# cat /etc/sysconfig/network-scripts/route-em2
172.17.117.0/24 via 172.16.131.1 dev em2
Then the docker interface IP becomes 172.18.0.1/16, because the route overlaps Docker's default 172.17.0.0/16 range and Docker picks the next free subnet.
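If you would rather pin the bridge subnet explicitly than let Docker pick one, the daemon's --bip option can be added to the OPTIONS line in /etc/sysconfig/docker (the address below is only an example):
OPTIONS='--selinux-enabled=false --bip=172.18.0.1/16'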
- yum install docker
# yum install -y docker
# systemctl disable firewalld && systemctl stop firewalld
# systemctl enable docker && systemctl start docker
# docker version
# docker info
. to pull a CentOS 7 base image
# docker pull centos:centos7
. To display the list of locally available images
# docker images
. Show all containers (default shows just running)
# docker ps -a
. To test your new image
# docker run centos:centos7 /bin/ping google.com -c 2
. create a container
# mkdir -p /var/www/html
# restorecon -R /var/www
# docker run -d -p 8000:8000 --name="python_web" -v /usr/sbin:/usr/sbin -v /usr/bin:/usr/bin -v /usr/lib64:/usr/lib64 -w /var/www/html -v /var/www/html:/var/www/html centos:centos7 /bin/python -m SimpleHTTPServer 8000
# netstat -tupln | grep 8000
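To check that the container is actually serving the directory (assumes curl on the host):
# curl -s http://localhost:8000/ | head    # should return the listing of /var/www/html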
. stop/remove all docker containers (in bash)
$ docker stop $(docker ps -a -q)
$ docker rm $(docker ps -a -q)
. clean up old containers
$ docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm
. copy docker image to another machine
# docker save -o <save image to path> <image name>
Or # docker save <image name> > saved.tar
# docker load -i <path to image tar file>
Or # docker save <image> | bzip2 | ssh user@host 'bunzip2 | docker load'
Or # docker save <image> | gzip -9 -c > image.tgz
# gunzip -c image.tgz | docker load
. start a stopped container
# docker start -ai mad_brattain    # by name
. attach to a running container, or open a different shell in it
# docker exec -i -t 665b4a1e17b6 /bin/bash
. script to auto upgrade docker images http://stackoverflow.com/questions/26423515/how-to-automatically-update-your-docker-containers-if-base-images-are-updated
# docker inspect -f "{{ .RestartCount }}" my-container
. to get the last time the container was (re)started
# docker inspect -f "{{ .State.StartedAt }}" my-container