Friday, May 20, 2016

python install vs pip
On the surface, both do the same thing: running either python setup.py install or pip install <PACKAGE-NAME> will install your Python package for you with a minimum of fuss.
However, using pip offers some additional advantages that make it much nicer to use.
  • pip will automatically download all dependencies for a package for you. In contrast, if you use setup.py, you often have to manually search out and download dependencies, which is tedious and can become frustrating.
  • pip keeps track of various metadata that lets you easily uninstall and update packages with a single command: pip uninstall <PACKAGE-NAME> and pip install --upgrade <PACKAGE-NAME>. In contrast, if you install a package using setup.py, you have to track down and delete its files by hand when you want to get rid of it, which is error-prone.
  • You no longer have to manually download your files. If you use setup.py, you have to visit the library's website, figure out where to download it, extract the file, run setup.py... In contrast, pip will automatically search the Python Package Index (PyPI) to see if the package exists there, and will automatically download, extract, and install the package for you. With a few exceptions, almost every genuinely useful Python library can be found on PyPI.
  • pip will let you easily install wheels, the new standard for distributing Python packages.
  • pip integrates well with virtualenv, a tool that lets you run multiple projects with conflicting library and Python-version requirements on the same machine (see the example after this list).
  • pip is bundled by default with Python as of Python 2.7.9 on the Python 2.x series, and as of Python 3.4.0 on the Python 3.x series, making it even easier to use.
  • pip show -f <package>  # list the installed files of a package
  • pip list [options]     # list installed packages
    -o, --outdated
    -u, --uptodate
    -e, --editable
    -l, --local
  • pip install
  • pip download
  • pip uninstall
  • pip freeze
  • pip search
  • pip wheel
  • pip hash
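As a rough sketch of the workflow these points describe, here is how virtualenv and pip fit together; the package name requests is only a placeholder, and any PyPI package would do:
    $ virtualenv venv                        # create an isolated environment in ./venv
    $ source venv/bin/activate               # subsequent pip commands now operate inside ./venv
    (venv)$ pip install requests             # fetch the package and its dependencies from PyPI
    (venv)$ pip list --outdated              # see which installed packages have newer releases
    (venv)$ pip install --upgrade requests   # upgrade in place
    (venv)$ pip freeze > requirements.txt    # record exact versions for reproducible installs
    (venv)$ pip uninstall requests           # cleanly remove the package again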

Tuesday, May 17, 2016

Basic SELinux Troubleshooting in CLI

SELinux isolates all processes running on the system to mitigate attacks which take advantage of privilege escalation. Privilege escalation means that a process gains more access rights than it should have.
To prevent this, SELinux enforces a Mandatory Access Control (MAC) mechanism over all processes. It labels every process, file, and directory according to rules specified in a security policy known as the SELinux policy.
The SELinux policy also specifies how processes interact with each other and how they can access files and directories. SELinux denies every action that is not explicitly allowed by the SELinux policy.
The most common reasons why SELinux denies an action are:
  • processes, files, or directories are labeled with incorrect SELinux context
  • confined processes are configured in a different way than what is expected by the default SELinux policy
  • there is a bug in the SELinux policy or in an application
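Before digging into the audit logs, it often helps to look at the current labels directly; a minimal sketch (the httpd path is only an example):
  $ ls -Z /var/log/httpd/error_log   # SELinux context of a file (user:role:type:level)
  $ ps -eZ | grep httpd              # the domain (type) each process runs in
  $ id -Z                            # the context of the current shell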

Troubleshooting SELinux AVC Messages on the Command Line

When SELinux denies an action, an Access Vector Cache (AVC) message is logged to the /var/log/audit/audit.log and /var/log/messages files, or the journald daemon logs it. If you suspect that SELinux denied an action that you attempted, follow these basic troubleshooting steps:
  1. Use the ausearch utility to find any recent AVC messages and confirm that SELinux denies the action:
    # ausearch -m AVC,USER_AVC -ts recent
    time->Thu Feb 18 14:24:24 2016
    type=AVC msg=audit(1455805464.059:137): avc:  denied  { append } for  pid=861 comm="httpd" name="error_log" dev="sdb1" ino=20747 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=file permissive=0
    
    The -m option specifies what kind of information ausearch returns. The -ts option specifies the time stamp: for example, -ts recent returns AVC messages from the last 10 minutes, and -ts today returns messages from the whole day.
  2. Use the journalctl utility to view more information about the AVC message:
    # journalctl -t setroubleshoot --since=[time]
    
    Replace [time] with the time from the AVC message found in the first step. In this example, SELinux prevented the httpd process from accessing the /var/log/httpd/error_log file:
        # journalctl -t setroubleshoot --since=14:20
    -- Logs begin at Fri 2016-01-15 01:17:17 UTC, end at Thu 2016-02-18 14:25:21 UTC. --
    Feb 18 14:24:24 fedora.23.virt setroubleshoot[866]: SELinux is preventing httpd from append access on the file error_log. For complete SELinux messages. run sealert -l e9d8fa2e-3608-4ffa-9e72-31a1b85e460b
    
  3. Use the sealert utility to further inspect the AVC message:
    # sealert -l [message_ID]
    
    Replace [message_ID] with the ID of the AVC message. The output will look similar to the examples below:
    • In this example, SELinux prevented the httpd process from accessing the /var/log/httpd/error_log file because it was incorrectly labeled with the var_run_t SELinux type instead of httpd_log_t:
      # sealert -l e9d8fa2e-3608-4ffa-9e72-31a1b85e460b
          SELinux is preventing httpd from open access on the file /var/log/httpd/error_log.
      
      ***** Plugin restorecon (99.5 confidence) suggests   **************************
      
      If you want to fix the label.
          /var/log/httpd/error.log default label should be httpd_log_t.
          Then you can run restorecon.
          Do
          # /sbin/restorecon -v /var/log/httpd/error_log
      
      [trimmed for clarity]
      
    • In this example, SELinux prevented the plugin-containe process from connecting to the network using the TCP protocol and from using the Bluejeans service because the mozilla_plugin_can_network_connect and mozilla_plugin_use_bluejeans Booleans were not enabled:
      # sealert -l fc46b9d4-e5a1-4738-95a7-26616d0858b0
      SELinux is preventing plugin-containe from name_connect access on the tcp_socket port 5000.
      
      *****  Plugin catchall_boolean (9.19 confidence) suggests   ******************
      
      If you want to allow mozilla plugin domain to connect to the network using TCP.
      Then you must tell SELinux about this by enabling the 'mozilla_plugin_can_network_connect' boolean.
      You can read 'mozilla_selinux' man page for more details.
      Do
      setsebool -P mozilla_plugin_can_network_connect 1
      
      *****  Plugin catchall_boolean (9.19 confidence) suggests   ******************
      
      If you want to allow mozilla plugin to use Bluejeans.
      Then you must tell SELinux about this by enabling the 'mozilla_plugin_use_bluejeans' boolean.
      You can read 'mozilla_selinux' man page for more details.
      Do
      setsebool -P mozilla_plugin_use_bluejeans 1
      
      [trimmed for clarity]
      
    • In this example, SELinux denied the passwd process access to the /home/user/output.txt file because there is no rule in the SELinux policy that allows passwd to write to files labeled with the user_home_t SELinux type:
      # sealert -l 1dd524dd-1784-44ef-b6d1-fff9238ed927
      
      SELinux is preventing passwd from write access on the file /home/user/output.txt.
      
      *****  Plugin catchall (100. confidence) suggests   **************************
      
      If you believe that passwd should be allowed write access on the output.txt file by default.
      Then you should report this as a bug.
      You can generate a local policy module to allow this access.
      Do
      allow this access for now by executing:
      # grep passwd /var/log/audit/audit.log | audit2allow -M mypol
      # semodule -i mypol.pp
      
      [trimmed for clarity]
      
  4. Perform actions according to suggestions provided by sealert. For example, use the restorecon utility to fix incorrectly labeled files or enable particular Booleans (see the sketch after these steps). If there is no suitable hint provided by sealert or you are not sure how to implement the suggestions, contact your support provider. If you believe that there is a bug in the SELinux policy, report a bug.
  5. Repeat the action you attempted to do before SELinux denied it. If SELinux is still preventing the action, report a bug.
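For example, if sealert suggests fixing a label (as in the first example above), you can also make the relabeling persistent so it survives future relabels; a minimal sketch, reusing the httpd_log_t type and path from that example:
  # semanage fcontext -a -t httpd_log_t '/var/log/httpd(/.*)?'   # record the file-context rule in the local policy
  # restorecon -Rv /var/log/httpd                                # relabel the existing files to match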


Monday, May 16, 2016

Docker




LXC vs LXD vs Docker

Linux Container technologies:
  • LXC
  • OpenVZ
  • Linux-VServer
  • FreeBSD jail
  • Solaris Zones
------------
  • Docker specializes in deploying apps (it encapsulates an app and its dependencies)
  • LXD specializes in deploying full (Linux) system containers (each one acts like a lightweight Linux virtual machine)


According to Red Hat, containers using the libvirt-lxc tooling have been deprecated as of RHEL 7.1. The Linux containers framework is now based on the docker command-line interface.
Linux containers are essentially the clone() system call, SELinux, and cgroups; whether they are of the LXC or Docker (through libcontainer) variety is mostly irrelevant, as the Linux kernel itself "does" the process isolation.
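A minimal sketch of that point, assuming the centos:centos7 image pulled later in this post: the container's process is an ordinary host process, and its isolation lives in /proc/<pid>/ns and the cgroup hierarchy.
  # docker run -d --name isolation-demo centos:centos7 sleep 1000
  # PID=$(docker inspect -f '{{.State.Pid}}' isolation-demo)   # host PID of the container's main process
  # ps -fp $PID               # visible in the host's process table like any other process
  # ls -l /proc/$PID/ns       # the kernel namespaces isolating it
  # cat /proc/$PID/cgroup     # the cgroups it is accounted under
  # docker rm -f isolation-demo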

--------------------

Docker Commands

Here is a summary of currently available (version 0.7.1) docker commands:
attach:     Attach to a running container
build:      Build a container from a Dockerfile
commit:     Create a new image from a container's changes
cp:         Copy files/folders from the containers filesystem to the host path
diff:       Inspect changes on a container's filesystem
events:     Get real time events from the server
export:     Stream the contents of a container as a tar archive
history:    Show the history of an image
images:     List images
import:     Create a new filesystem image from the contents of a tarball
info:       Display system-wide information
insert:     Insert a file in an image
inspect:    Return low-level information on a container
kill:       Kill a running container
load:       Load an image from a tar archive
login:      Register or Login to the docker registry server
logs:       Fetch the logs of a container
port:       Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
ps:         List containers
pull:       Pull an image or a repository from the docker registry server
push:       Push an image or a repository to the docker registry server
restart:    Restart a running container
rm:         Remove one or more containers
rmi:        Remove one or more images
run:        Run a command in a new container
save:       Save an image to a tar archive
search:     Search for an image in the docker index
start:      Start a stopped container
stop:       Stop a running container
tag:        Tag an image into a repository
top:        Lookup the running processes of a container
version:    Show the docker version information
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
-a                                          attach to STDIN
-d                                          run container in background
-h                                          container hostname
-i, --interactive                           keep STDIN open even if not attached
--ip=""                                     container IPv4 address
--link=[]                                   add link to another container
--name=""                                   Assign a name to the container
-P, --publish-all                           Publish all exposed ports to random ports
-p, --publish=[]                            publish a container's ports to the host
--rm                                        Automatically remove the container when it exits
-u, --user="<name|uid>[:<group|gid>]"       username or UID
-v, --volume=[host-src:]container-dest[:<options>]   Bind mount a volume. The comma-delimited options are [rw|ro], [z|Z], [[r]shared|[r]slave|[r]private], and [nocopy]. The 'host-src' is an absolute path or a name value.
--volumes-from=[]                           Mount volumes from the specified container(s)
-w, --workdir=""                            Working directory inside the container
The -p flag can take a few different formats:
ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Essentially, you can omit either ip or hostPort, but you must always specify a containerPort to expose. Docker will automatically provide an ip and hostPort if they are omitted. Additionally, all of these publishing rules will default to tcp. If you need udp, simply tack it on to the end such as -p 1234:1234/udp.
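A few hedged examples of those formats, borrowing the SimpleHTTPServer command used in the how-to below; the port numbers are arbitrary:
  # docker run -d -p 8080:8000 centos:centos7 /bin/python -m SimpleHTTPServer 8000            # host 8080 -> container 8000 on all host IPs
  # docker run -d -p 127.0.0.1:8080:8000 centos:centos7 /bin/python -m SimpleHTTPServer 8000  # bind only to the loopback address
  # docker run -d -p 127.0.0.1::8000 centos:centos7 /bin/python -m SimpleHTTPServer 8000      # random host port on 127.0.0.1
  # docker run -d -p 8000 centos:centos7 /bin/python -m SimpleHTTPServer 8000                 # random host port on all host IPs
  # docker run -d -p 1234:1234/udp centos:centos7 /bin/sleep 1000                             # publish a UDP port instead of TCP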
============
Docker images are stored under the /var/lib/docker directory, in a layout that varies depending on the driver Docker is using for storage. In most places this will be aufs, but Red Hat went with devicemapper. You can manually set the storage driver with the -s or --storage-driver= option to the Docker daemon.
  • /var/lib/docker/{driver-name} will contain the driver specific storage for contents of the images.
  • /var/lib/docker/graph/<id> now only contains metadata about the image, in the json and layersize files.
In the case of aufs:
  • /var/lib/docker/aufs/diff/<id> has the file contents of the images.
  • /var/lib/docker/repositories-aufs is a JSON file containing local image information. This can be viewed with the command docker images.
In the case of devicemapper:
  • /var/lib/docker/devicemapper/devicemapper/data stores the images
  • /var/lib/docker/devicemapper/devicemapper/metadata the metadata
  • Note these files are thin provisioned "sparse" files so aren't as big as they seem.
The images are stored in /var/lib/docker/graph/<id>/layer.
Note that images are just diffs from the parent image. The parent ID is stored with the image's metadata /var/lib/docker/graph/<id>/json.
When you docker run an image, AUFS will 'merge' all layers into one usable file system.
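A hedged way to see those stacked layers and the parent link for a local image (centos:centos7 is simply the image used elsewhere in this post):
  # docker history centos:centos7    # each layer, the command that created it, and its size
  # docker inspect centos:centos7    # the image's JSON metadata, including its parent ID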

-------
* Increase the base storage available to a single container, which defaults to 10 GB
* Increase the total data space used by Docker on your platform, which defaults to 107.4 GB (run 'docker info' and look at Data Space Total)

# docker info
 ...
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Total: 107.4 GB
 ...
 Data loop file: /home/docker/devicemapper/devicemapper/data
 Metadata loop file: /home/docker/devicemapper/devicemapper/metadata
Modify the OPTIONS line in the Docker config file /etc/sysconfig/docker:
OPTIONS='--selinux-enabled=false --storage-opt dm.no_warn_on_loop_devices=true --storage-opt dm.basesize=400G -g /home/docker'
Create 400G data file:
dd if=/dev/zero of=/home/docker/devicemapper/devicemapper/data bs=1G count=0 seek=400
Docker's default IP is 172.17.42.1/16, assigned to the virtual interface docker0. But docker0 is no ordinary interface. It is a virtual Ethernet bridge that automatically forwards packets between any other network interfaces that are attached to it. This lets containers communicate both with the host machine and with each other. Every time Docker creates a container, it creates a pair of "peer" interfaces that are like opposite ends of a pipe: a packet sent on one will be received on the other. It gives one of the peers to the container to become its eth0 interface and keeps the other peer, with a unique name like vethAQI2QT, out in the namespace of the host machine. By binding every veth* interface to the docker0 bridge, Docker creates a virtual subnet shared between the host machine and every Docker container.
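A hedged way to look at that bridge and the veth peers from the host (assumes the bridge-utils and iproute packages are installed):
  # ip addr show docker0    # the bridge and its 172.17.42.1/16 address
  # brctl show docker0      # the veth* interfaces currently attached to the bridge
  # ip link | grep veth     # the host-side ends of each container's "pipe"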

Docker is based on so-called images. These images are comparable to virtual machine images and contain files, configurations, and installed programs. And just like virtual machine images, you can start instances of them. A running instance of an image is called a container. You can make changes to a container (e.g. delete a file), but these changes will not affect the image. However, you can create a new image from a running container (and all its changes) using docker commit <container-id> <new-image-name>.
Export is used to persist a container, and save is used to persist an image. export will give you a flat .tar archive containing your container's filesystem; all the metadata will be lost, so if you run a container from the imported image you have to re-specify the CMD and other metadata. (Export/import creates a new image with a fresh creation time, whereas save/load keeps the creation time of the original image, as shown by docker images.)
docker save and docker load will preserve image metadata (CMD, ENTRYPOINT, etc.) and all layers. docker export and docker import don't preserve metadata. This is by design and it's not being changed.
docker export does not export everything about the container — just the filesystem. So, when importing the dump back into a new docker image, additional flags need to be specified to recreate the context.
docker export - saves a container’s running or paused instance to a file
docker save - saves a non-running container image to a file
sudo docker export <CONTAINER ID> > /home/export.tar
sudo docker save <image> > /home/save.tar
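The counterpart commands bring those archives back; the image name my-imported-image is just a placeholder:
  cat /home/export.tar | sudo docker import - my-imported-image   # creates a new, flat image without the original metadata
  sudo docker load < /home/save.tar                               # restores the image with its layers and metadata intact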

In RHEL 7, create a static route file:
# cat /etc/sysconfig/network-scripts/route-em2
172.17.117.0/24 via 172.16.131.1 dev em2
Because the host now has a route claiming part of the 172.17.0.0/16 range, Docker avoids the conflict and the docker0 interface IP becomes 172.18.0.1/16.
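If you would rather pin the bridge subnet explicitly instead of relying on this auto-selection, the Docker daemon's --bip option can be appended to the OPTIONS line shown earlier; the subnet below is only an example:
  # in /etc/sysconfig/docker, add --bip to the existing OPTIONS line
  OPTIONS='--selinux-enabled=false --storage-opt dm.no_warn_on_loop_devices=true --storage-opt dm.basesize=400G -g /home/docker --bip=172.18.0.1/16'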

- yum install docker
  yum install -y docker
  systemctl disable firewalld && systemctl stop firewalld
  systemctl enable docker && systemctl start docker
  docker version
  docker info
  . to pull a CentOS 7 base image
  docker pull centos:centos7
  . To display the list of locally available images
  docker images
  . Show all containers (default shows just running)
  docker ps -a
  . To test your new image
  docker run centos:centos7 /bin/ping google.com -c 2
  . create a container
  mkdir -p /var/www/html
  restorecon -R /var/www
  # docker run -d -p 8000:8000 --name="python_web" -v /usr/sbin:/usr/sbin -v /usr/bin:/usr/bin -v /usr/lib64:/usr/lib64 -w /var/www/html -v /var/www/html:/var/www/html centos:centos7 /bin/python -m SimpleHTTPServer 8000
  # netstat -tupln | grep 8000
  . stop/remove all docker containers (in bash)
  $ docker stop $(docker ps -a -q)
  $ docker rm $(docker ps -a -q)
  . clean up old containers
  docker ps -a | grep 'weeks ago' | awk '{print $1}' | xargs --no-run-if-empty docker rm 
  . copy docker image to another machine
  # docker save -o <save image to path> <image name>
  Or # docker save <image name> > saved.tar
  # docker load -i <path to image tar file>
  Or # docker save <image> | bzip2 | ssh user@host 'bunzip2 | docker load'
  Or # docker save <image> | gzip -9 -c > image.tgz
       # gunzip -c image.tgz | docker load
  . start a stopped container
  # docker start -ai 665b4a1e17b6 # by ID
  docker start -ai mad_brattain # by name
  . attach to a running container
  # docker attach 665b4a1e17b6
  or, to get a separate shell inside the container:


  docker exec -i -t 665b4a1e17b6 /bin/bash


  . script to auto upgrade docker images http://stackoverflow.com/questions/26423515/how-to-automatically-update-your-docker-containers-if-base-images-are-updated




  .  to get the number of restarts for container “my-container”
  docker inspect -f "{{ .RestartCount }}" my-container

  . to get the last time the container was (re)started
 docker inspect -f "{{ .State.StartedAt }}" my-container