Wednesday, November 16, 2011

Xen tasks

http://docs.vmd.citrix.com/XenServer/5.6.0/1.0/en_gb/reference.html
http://docs.vmd.citrix.com/XenServer/5.0.0/1.0/en_gb/
http://docs.vmd.citrix.com/XenServer/4.0.1/reference/reference.html


In XenServer, a Storage Repository (SR) is a storage target that contains virtual disks (VDIs) and ISOs.
A PBD (physical block device) is what they call the interface between a physical host and an attached SR; it is responsible for storing the device configuration that allows the host to interact with the storage target.
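To see the relationship, list the PBDs that join a host to an SR (a minimal example; <sr-uuid> is whatever xe sr-list reports):
# xe pbd-list sr-uuid=<sr-uuid> params=uuid,host-uuid,device-config,currently-attached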
Recover a Deleted XenServer Virtual Machine http://support.citrix.com/article/ctx117508


Add local storage
#: fdisk -l (or gdisk -l if the disk uses a GPT (GUID Partition Table))
#: pvcreate /dev/sdb
sdb is my new volume. Then to configure it as local storage:
#: xe sr-create type=lvm content-type=user device-config:device=/dev/disk/by-id/scsi-SATA_ST3320620AS_5QF7QZZL name-label="LOCAL SR"
add option "sm-config:allocation=thin" to support thin provisioning
Confirm the new SR is created:
#: xe sr-list host=xen01-man
(you do not actually create a partition on your disk before turning it into a storage repository)
The device name can be found by ls -l /dev/disk/by-id or with "xe pbd-list"
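One-shot version of the above (a sketch; assumes a single whole disk; sr-create prints the uuid of the new SR):
  DEV=/dev/disk/by-id/<device-id>    # from: ls -l /dev/disk/by-id
  SRUID=`xe sr-create type=lvm content-type=user device-config:device=$DEV name-label="LOCAL SR"`
  xe sr-list uuid=$SRUID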

Delete local storage
1. First, you have to determine the Storage-Repository-UUID:
# xe sr-list or xe sr-list name-label=Local\ storage
-> write down / take note of SR-UUID of the SR to delete
2. Find the corresponding Physical Block Device (PBD):
# xe pbd-list sr-uuid=your-SR-uuid -> write down / take note of PBD-UUID of the PBD to unplug and delete
3. Unplug the PBD:
# xe pbd-unplug uuid=your-PBD-uuid
4. Delete PBD:
# xe pbd-destroy uuid=your-PBD-uuid
5. Delete the association of your SR and the PBD:
# xe sr-forget uuid=your-SR-uuid
(
script:
  SRUID=`xe sr-list name-label=Local\ storage --minimal`
  PBDUID=`xe pbd-list sr-uuid=$SRUID --minimal`
  xe pbd-unplug uuid=$PBDUID
  xe pbd-destroy uuid=$PBDUID
  xe sr-forget uuid=$SRUID
)

Delete Orphaned Disks

  1. xe vdi-list
    uuid ( RO)                : cb5781e0-c6f9-4909-acd6-5fd4b509d117
              name-label ( RW): Vista master for UIA
        name-description ( RW): Created by template provisioner
                 sr-uuid ( RO): 72cc0d44-bea7-6c15-cf8d-c2965ca495b2
            virtual-size ( RO): 25769803776
                sharable ( RO): false
               read-only ( RO): false
  2. xe vdi-destroy uuid=cb5781e0-c6f9-4909-acd6-5fd4b509d117
  3. xe sr-scan uuid=72cc0d44-bea7-6c15-cf8d-c2965ca495b2
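
To find destruction candidates in the first place, a sketch that lists VDIs with no VBD attached (verify each by hand before vdi-destroy; ISOs and snapshot base copies also show up here):
  for vdi in $(xe vdi-list params=uuid --minimal | tr ',' ' '); do
    if [ -z "$(xe vbd-list vdi-uuid=$vdi --minimal)" ]; then
      echo "no VBD: $vdi"
      xe vdi-list uuid=$vdi params=name-label,sr-uuid
    fi
  done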

Enable thin provisioning on local storage
warning: this will destroy all VMs on the respective storage.
. find local storage repository
# cat /etc/xensource-inventory|grep "DEFAULT_SR_PHYDEVS"
. get UUID of the local SR
# xe sr-list type=lvm
. get UUID of PBD
# xe pbd-list sr-uuid=<sr-uuid>
. disconnect and destroy PBD
# xe pbd-unplug uuid=<pbd-uuid>
# xe sr-destroy uuid=<sr-uuid>
. create new sr with thin provisioning enabled
# xe sr-create host-uuid=<host-uuid> content-type=user type=ext device-config:device=/dev/sda3 shared=false name-label="Local storage"
or
# xe sr-create host-uuid=<host-uuid> content-type=user type=lvm device-config:device=/dev/sda3 shared=false name-label="Local storage" sm-config:allocation=thin

XenServer allows us to use two different types of thin-provisioned SRs: ext3 and LVHD. LVHD has been the default SR type since XenServer 5.5. The ext3 type has been around much longer and, as the name implies, is a simple EXT3 Linux volume. On this volume the virtual disks of your VMs are stored as VHD files, and yes, you can access these files directly on your XenServer host or via SCP; just navigate to /var/run/sr-mount/<sr-uuid>. The ext-based SR has thin provisioning enabled by default. There is a small performance penalty for using thin-provisioned storage, as the disks have to be extended at runtime.
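
On an ext SR the thin provisioning is visible directly: compare each VDI's virtual-size with its physical-utilisation, or check how much space the VHD files really occupy (a sketch assuming a single ext SR):
  SRUID=`xe sr-list type=ext params=uuid --minimal`
  xe vdi-list sr-uuid=$SRUID params=name-label,virtual-size,physical-utilisation
  du -sh /var/run/sr-mount/$SRUID/*.vhd    # space the VHD files actually occupy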

Reattach local storage
add a second disk, /dev/sdb, from another XenServer host that contains an SR with VMs or virtual disks.
. find out all physical disks
# pvdisplay
. make the new SR known to XenServer
# xe sr-introduce uuid=<vg-uuid> type=lvm name-label="Local storage 2" content-type=user 
. connect the PBD with SR
(find the device name and host-uuid first)
  # ls -l /dev/disk/by-id
  # xe host-list
# xe pbd-create host-uuid=<host-uuid> sr-uuid=<uuid> device-config:device=/dev/disk/by-id/<device-id>
. activate the PBD
# xe pbd-plug uuid=<uuid>
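
As one sequence (a sketch; assumes a single host, and that the SR uuid is the one embedded in the VG name shown by pvdisplay):
  SRUID=<vg-uuid>                    # from pvdisplay: VG_XenStorage-<vg-uuid>
  HOSTUID=`xe host-list --minimal`
  xe sr-introduce uuid=$SRUID type=lvm name-label="Local storage 2" content-type=user
  PBDUID=`xe pbd-create host-uuid=$HOSTUID sr-uuid=$SRUID device-config:device=/dev/disk/by-id/<device-id>`
  xe pbd-plug uuid=$PBDUID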

Extend a Virtual Disk
In XenCenter, shut down the VM. Select the VM -> Storage -> Properties -> Size and Location -> Size
Or in cli:
. find vdi uuid
# xe vm-disk-list vm=<vm-name>
. extend the disk
# xe vdi-resize disk-size=<new size (GiB|MiB)> uuid=<vdi-uuid>
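
For example, to grow the first disk of a VM to 20 GiB (a sketch; userdevice=0 assumes the disk sits at device position 0):
  VDIUID=`xe vbd-list vm-name-label=<vm-name> userdevice=0 params=vdi-uuid --minimal`
  xe vdi-resize uuid=$VDIUID disk-size=20GiB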

Then in Windows, extend from Server Manager -> Disk Management -> Extend
Or use DiskPart (in Windows 2008)
  diskpart
  DISKPART> list volume
  DISKPART> select volume <#>
  DISKPART> extend size=n (n in megabytes)

In Linux ext3, use resize2fs to resize a data disk:
# umount /dev/xvdc1
# fdisk /dev/xvdc
   d (delete the partition and recreate it)
   n
   w
# e2fsck -f /dev/xvdc1
# resize2fs /dev/xvdc1
# mount /dev/xvdc1 /data

For Linux ext3 system partition:
  1. (check current kernel parameters
    xe vm-list name-label=<vm-name> params=PV-args,uuid )
    shutdown vm
  2. set vm to boot into single-user mode from XenServer host CLI
    (Or from XenCenter 6, VM -> Start/Shut Down -> Start in Recovery Mode )
    # xe vm-param-set uuid=<vm-uuid> PV-args=single
  3. boot vm "xe vm-start" and change partition table of the disk:
    # fdisk -l
    # fdisk /dev/xvda
       d
       n
       p (primary)
       1 (partition number)
        (size)
        w
  4. reboot the vm: reboot vm (may need to boot to rescue media and run fsck)
  5. resize the filesystem: resize2fs /dev/xvda1
  6. on XenServer host, remove single-user boot mode setting
    xe vm-param-set uuid=<vm-uuid> PV-args=
    . use xfs_growfs for XFS and resize_reiserfs for ReiserFS to grow after resizing
For Linux LVM system partition:
  1. boot to single user mode (init S)
  2. use fdisk to add new partition with Linux LVM type 8e, (e.g. /dev/xvda3)
    or extend the partition size
  3. pvcreate /dev/xvda3 # or pvresize /dev/xvda3
  4. pvscan # find out vg name, e.g. VolGroup
  5. vgextend <VolGroup> /dev/xvda3 # skip this if extending partition
  6. lvextend -L+3.99G /dev/VolGroup/lv_root or
    lvextend -l +100%FREE /dev/VolGroup/lv_root
  7. resize2fs /dev/VolGroup/lv_root
    (vgchange -a y; lvreduce -L ..., lvcreate -L 20G -n logvol_name volgrp_name)
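
Condensed, the common "grow the LVM root" flow is (a sketch assuming the names above: partition /dev/xvda3, volume group VolGroup):
  pvresize /dev/xvda3                         # after growing the partition with fdisk
  lvextend -l +100%FREE /dev/VolGroup/lv_root
  resize2fs /dev/VolGroup/lv_root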

Managing VM from XenServer host
. listing VMs installed and their status
# xe host-vm-list -u root
. list configuration parameters of a VM
# xe vm-param-list -u root vm-name=<vm-name> (or vm-id=<vm-uuid>)
.  start/shutdown/restart/suspend/resume/uninstall vm
# xe vm-start -u root vm-name=<vm-name>

# xe vm-shutdown -u root vm-name=<vm-name>
# xe vm-reboot -u root vm-name=<vm-name>
# xe vm-suspend -u root vm-name=<vm-name>
# xe vm-resume -u root vm-name=<vm-name>
# xe vm-uninstall -u root vm-name=<vm-name>
. clone a vm (make sure the vm is powered off first)
# xe vm-clone -u root vm-name=<vm-name> new-name=<new-vm-name> new-description=<description>

Backup VM

# xe vm-shutdown uuid=<uuid>
# xe vm-export uuid=<uuid> filename=/backup/vmfile.xva

---
# ls -l /dev/disk/by-uuid
# vgscan
# lvdisplay; vgdisplay; pvdisplay
# xe sr-probe type=local
# pvs
---

Add local ISO Repository
  1. NFS
    • enable nfs server
        - # cat > /etc/exports
           /iso 127.0.0.1(ro,sync,root_squash)
        - chkconfig portmap on; chkconfig nfs on # make sure the services are turned on
        - service portmap restart; service nfs restart
    • From XenCenter, add ISO repository through nfs, use 127.0.0.1:/iso
      The default XenServer root partition is only about 4 GB; mount extra disk space for large ISOs.
  2. Create Storage Repository
    Use the uuid returned by the pbd-create command as the input for pbd-plug
    # UUID=$(uuidgen) or set UUID=`uuidgen` (for csh)
    # echo $UUID
    705c23ee-b53d-4cc9-9d0c-e7e395fa0f82
    # xe sr-introduce name-label="devxen02 ISO Library" content-type=iso shared=false \
     type=iso uuid=${UUID}

    705c23ee-b53d-4cc9-9d0c-e7e395fa0f82
    # xe sr-param-set other-config:auto-scan=true uuid=${UUID}
    # xe pbd-create host-uuid=`xe host-list name-label=devxen02-man --minimal` \
     sr-uuid=${UUID} device-config:location="/iso" device-config:options="-o bind"

    c28bfc89-c207-7ef3-2776-50794e610e4e
    # xe pbd-plug uuid=c28bfc89-c207-7ef3-2776-50794e610e4e
    # xe sr-list
    uuid ( RO)                : 705c23ee-b53d-4cc9-9d0c-e7e395fa0f82
              name-label ( RW): devxen02 ISO Library
        name-description ( RW):
                    host ( RO): devxen02-man
                    type ( RO): iso
            content-type ( RO): iso
    
  3. ?. Use /opt/xensource/bin/xe-mount-iso-sr (not working)

Set VM auto start for XenServer 6
. set XenServer to allow auto-start
 # xe pool-list
 # xe pool-param-set uuid=UUID other-config:auto_poweron=true
 . set VM to auto start
 # xe vm-list
 # xe vm-param-set uuid=UUID other-config:auto_poweron=true

Shutdown VM
. get vm uuid
# xe vm-list
. shutdown the VM (force=true if it will not shut down cleanly)
# xe vm-shutdown uuid=<vm-uuid> force=true
. if failed, reset power state
# xe vm-reset-powerstate uuid=<vm-uuid> force=true
. if failed, need to destroy the domain
# list_domains
# /opt/xensource/debug/xenops destroy_domain -domid <DOMID>
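
Escalation as a script (a sketch; VMUID is the VM uuid, and list_domains output is parsed assuming the domain id is the first column):
  VMUID=<vm-uuid>
  xe vm-shutdown uuid=$VMUID force=true || xe vm-reset-powerstate uuid=$VMUID force=true
  DOMID=`list_domains | grep $VMUID | awk '{print $1}'`
  /opt/xensource/debug/xenops destroy_domain -domid $DOMID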

List snapshots
# vhd-util scan -f -m "VHD-*" -l VG_XenStorage-<uuid-of-sr> -p
. map disk back to VM
# xe vbd-list vdi-uuid=<UUID_of_disk>
# lvscan

Boot Linux in Recovery Mode
http://support.citrix.com/article/CTX132039
Unlike Microsoft Windows, which uses device drivers for paravirtualization, Linux virtual machines have a paravirtualized kernel. During the installation of Linux, it actually runs as a Hardware-Assisted Virtual Machine (HVM) and has access to the DVD just like Windows. Once the Linux installation is complete, a Xen kernel is swapped in. However, as the paravirtualization is kernel-based, this causes issues with accessing the DVD on boot, since that kernel is not loaded yet.

For XenCenter 5, boot by "Start/Shutdown -> Start in Recovery Mode".
Command line:
xe vm-param-set uuid=<vm-uuid> HVM-boot-policy=BIOS\ order
xe vm-param-set uuid=<vm-uuid> HVM-boot-params:order=dc
xe vm-start name-label='<vm-name>'
Make sure that you have a CD selected for the VM. When you are done, do:
xe vm-param-set uuid=<vm-uuid> HVM-boot-policy=
This will unset BIOS order so the VM will boot using pygrub again.


Fix VM boot sector
From host, run: xe-edit-bootloader -n <vm-name-label> -u <vm-uuid> -p <partition-number>

Setup new/clone linux server
. get the CD repository and a tar of the old server
. create new VM
- highlight the new VM; under the "Storage" tab, select the created virtual disk and open Properties,
then highlight the last entry on the left panel, "vm name (Device 0, (Read/Write))". Changing the "Device Position" changes the device name (e.g. /dev/hda)
- boot Red Hat with "linux rescue"
sh-3.2# fdisk -l            # confirm the device
sh-3.2# fdisk /dev/hda      # create 3 partitions: 1: 100M for /boot, 2: swap, 3: root
                            # make sure the swap partition is type 82 (Linux swap)
sh-3.2# mkfs.ext3 /dev/hda1
sh-3.2# mkfs.ext3 /dev/hda3
sh-3.2# mkdir /mnt/sysimage
sh-3.2# mount /dev/hda3 /mnt/sysimage
sh-3.2# mkdir /mnt/sysimage/boot
sh-3.2# mount /dev/hda1 /mnt/sysimage/boot
sh-3.2# cd /mnt/sysimage
sh-3.2# sftp dchen@10.213.66.23      # this is the underlying XenServer host
sftp> get <server.tgz>               # fetch the system tar file, then quit
sh-3.2# tar zxpf <server.tgz>
modify boot/grub/device.map; grub.conf (make sure root=/dev/hda3); etc/fstab; etc/hosts; etc/sysconfig/network; etc/sysconfig/network-scripts/ifcfg-eth0; etc/resolv.conf; etc/ntp.conf
reboot into "linux rescue" mode again; the system will find the existing Linux partition and mount it under /mnt/sysimage
sh-3.2# chroot /mnt/sysimage
sh-3.2# grub-install /dev/hda
sh-3.2# chkconfig network on
Then force reboot in Citrix XenCenter.
- make sure the subdirectories of /var exist; many services depend on them, e.g. /var/log, /var/run, /var/log/httpd
- remove packages: "hp-snmp-agents hpacucli hp-health"
- install packages: tcsh, OpenIPMI, OpenIPMI-libs
- may want to run kudzu, the "hardware discovery utility", once
- services that may be turned off:
  acpid; atd; auditd (needed for selinux); anacron; autofs;
  avahi-daemon: service discovery via mDNS;
  bluetooth; cpuspeed; cups; firstboot; gpm; haldaemon; hidd; isdn;
  irqbalance: needed for multi-cpu;
  kudzu: hardware probe;
  lm_sensors: motherboard sensor monitoring;
  lvm2-monitor; mdmonitor;
  messagebus: broadcasts notification of system events;
  microcode_ctl;
  pcscd: pc smart card;
  netfs; nfslock; portmap; rpcbind; rpcgssd; rpcidmapd: for NFS
  yum-updatesd;

VM doesn't start
- Storage Repository (SR) is not available, in "broken" state; "Repair" it first.
- find the broken Physical Block Device (PBD) storage:
  # xe pbd-list currently-attached=false
- check logs: /var/log/xensource.log*; SMlog*; messages*
  1. check if PBD unplugged
    # grep "PBD.plug" xensource.log
    # xe pbd-list host-uuid=<host-uuid> sr-uuid=<sr-uuid>
    # xe pbd-plug uuid=<uuid>
    # xe sr-list name-label="My SR" params=uuid
  2. LUN/HDD → PV → VG (SR) → LV (VDI)
    • find device SCSI ID
      # xe pbd-list params=device-config sr-uuid=<sr-uuid>
    • is LUN empty
      # udevinfo -q all -n /dev/disk/by-id/<...>
  3. VG name generated from SR  uuid (+ prefix)
    VG created on PV, the returned uuid should match
    # pvs | grep <scsi id>
    # xe sr-list name-label="My SR" params=uuid
  4. LV name generated from VDI uuid (+ prefix)
    these two return should match:
    # cat /etc/lvm/backup/VG... | grep VHD
    # xe vdi-list sr=<uuid> params=uuid
  5. Displaying VG (vgs), PV (pvs), LV (lvs)
    # vgs VG_XenStorage-<sr-uuid>
    # pvs
    # lvs
  6. Addressing block devices (/dev/disk)
    # ls -lR /dev/disk | grep <scsi id>
    check /dev/disk/by-id; /dev/disk/by-scsibus; /dev/disk/by-path
  7. Examining HDD/LUN with "hdparm -t"
    # hdparm -t /dev/disk/by-id/scsi-<scsi id> # got ioctl error
  8. Restoring PV & VG from backup
    LVM metadata backup:
    /etc/lvm/backup/VG_XenStorage-<sr uuid>
    • remove old VG & PV
      # vgremove "VG_XenStorage-<sr-uuid>
      # pvremove /dev/mapper/<scsi id>
    • recreating PV & VG
      # pvcreate --uuid <PV uuid from backup file> --restorefile /etc/lvm/backup/VG_XenStorage-<sr-uuid> /dev/mapper/<scsi id>
      # vgcfgrestore VG_XenStorage-<sr-uuid>
    • confirm VG_XenStorage-<uuid> matches <sr-uuid>
      # pvs
      # xe sr-list name-label="My SR" params=uuid
    • plugging PBD back
      # xe pbd-plug uuid=<...>
      # xe sr-scan uuid=<...>
      Error code: SR_BACKEND_FAILURE_46
      Error parameters: , The VDI is not available [opterr=Error scanning VDI 7e5f83a7-b6c4-4fae-9899-1e6a2cdabd32]
      # xe vdi-list uuid=<above uuid>
      # lvremove /dev/VG_XenStorage-<uuid>/<uuid>
      # xe sr-scan uuid=<...>
1. xe task-list to view the pending tasks
2. xe task-cancel force=true uuid=<task-uuid> to cancel a specific task

Replace a Network Interface Card
http://support.citrix.com/article/CTX127161
The following procedure describes the steps required to replace the management interface. If the NIC you are replacing is not the management interface, you can skip steps 2 and 8. Step 7 is only required if you need to assign an IP address to the PIF.
1. Shutdown all the virtual machines on the host using XenCenter or from the console:
xe vm-shutdown uuid=<vm_uuid>

If it cannot shut down, go back to the console GUI and shut it down there

2. Disable the management interface using the host-management-disable command:
xe host-management-disable
3. Remove the PIF records of the NICs being replaced using the pif-forget command:
xe pif-forget uuid=<pif_uuid>

The VMs using the PIF need to be down before you forget it

(4. Shutdown the host and replace the NIC:
xe host-disable host=<host_name> )
5. Re-introduce the devices using the pif-introduce command:
xe pif-introduce device=<device_name> host-uuid=<host-uuid> mac=<mac_address>

Example:
xe pif-introduce device=eth0 host-uuid=49e1cde9-a948-4f8f-86d5-3b681995566a mac=00:19:bb:2d:7e:7a
6. Verify the new configuration using the pif-list command:
xe pif-list params=uuid,device,MAC
(7. Reset the interface address configuration using the pif-reconfigure-ip command:
xe pif-reconfigure-ip uuid=<pif_uuid> mode=static IP=<ip_address> netmask=<netmask> gateway=<gateway> DNS=<dns>
8. Reset the host management interface using the host management-reconfigure command:
xe host-management-reconfigure pif-uuid=<pif_uuid> )
Steps 7 and 8 can be done in the console UI. You may need to reboot twice.

Other commands
xe host-list
xe vm-list
xe pif-list
xe vif-list

Change host name
xe host-set-hostname-live host-uuid=<host_uuid> host-name=<name>
xe host-param-set name-label=<name> uuid=<host-uuid>

Remove VM Interface
a) xe vm-vif-list <vm-selectors>
b) xe vif-unplug uuid=<vif-uuid> force=true
c) xe vif-destroy uuid=<vif-uuid>

Add a Permanent Search Domain Entry in resolv.conf

Use the following steps to complete the task:

  1. To identify the management NIC on a given host, use the following command:
    xe pif-list host-name-label=<hostname of the xenserver to modify> management=true
  2. To add the static search domain entry to resolv.conf (see the combined sketch after this list):
    xe pif-param-set uuid=<pif uuid> other-config:domain=searchdomain.net
    Multiple search domain entries can be entered separated by commas:
    xe pif-param-set uuid=<pif uuid> other-config:domain=searchdomain.net,searchdomain2.net
  3. Reboot the XenServer Host.
  4. cat /etc/resolv.conf should now show the search domain entry.
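
Steps 1 and 2 combined (a sketch; assumes the host's hostname matches its name-label):
  PIFUID=$(xe pif-list host-name-label=$(hostname) management=true params=uuid --minimal)
  xe pif-param-set uuid=$PIFUID other-config:domain=searchdomain.net
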
Leave pool
After the system joined a pool, something got messed up.

# xe pool-recover-slaves
The host is its own slave. Please use pool-emergency-transition-to-master or pool-emergency-reset-master.
# xe pool-emergency-transition-to-master
Host agent will restart and transition to master in 10 seconds...
# xe host-list
uuid ( RO)                : f4457ca9-4526-42ec-af53-742e86475f9a
          name-label ( RW): devxen02-man
    name-description ( RW): Default install of XenServer

Enable autostart for XenServer 6.0
Re-enable the VM autostart feature for the pool object:
xe pool-param-set uuid=UUID_OF_THE_POOL other-config:auto_poweron=true

Set the following parameter for VMs you wish to have automatically starting:
xe vm-param-set uuid=UUID_OF_THE_VM other-config:auto_poweron=true

Find orphaned vdi
ls -alh on the SR mount point, then
# xe vbd-list vdi-uuid=<disk uuid> params=all
to find out which VM the disk is associated with. If the name-label of the VDI is "base copy", the disk is the original of the linked snapshot disks.
xe host-call-plugin host-uuid=<uuid> plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=<uuid>
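
To spot those snapshot originals quickly (a hedged one-liner; "base copy" is the name-label XenServer gives them):
  xe vdi-list name-label="base copy" params=uuid,sr-uuid,virtual-size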

  • xe host-list params=all
  • List of VMs on XenServer (with UUIDs)
    # xe vm-list params=all | \
    awk '{if ( $0 ~ /name-label/) {$1=$2=$3=""; vmname=$0} \
    if ($0 ~ /affinity.*\:\ [a-f0-9]/) {host=$4; printf "%s \n", vmname}}'

    # xe vm-list | awk '{if ( $0 ~ /uuid/) {uuid=$5} if ($0 ~ /name-label/) {$1=$2=$3="";vmname=$0; printf "%45s - %s\n", vmname, uuid}}'
  • List virtual machines: xe vm-list
  • Get uuid of all running VMs:
    xe vm-list is-control-domain=false power-state=running params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"
  • Shutdown VM: xe vm-shutdown --force vm=<uuid> (a loop combining these appears after this list)
    • If that does not work, find the pending task queue: xe task-list
    • Cancel the process that holds up the system:
      xe task-cancel uuid=<task uuid>
    • If that still fails: xe-toolstack-restart
    • If it yet still fails, find the domain number: list_domains | grep <vm-uuid>
      then: /opt/xensource/debug/xenops destroy_domain -domid <domain-number>
  • Suspend VM: xe vm-suspend vm=<uuid>
  • List all the parameters available on the selected host:
    xe vm-param-list uuid=<uuid>
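
Combining the above, a sketch that cleanly shuts down every running guest (add --force only if a plain shutdown fails):
  for uuid in $(xe vm-list is-control-domain=false power-state=running params=uuid --minimal | tr ',' ' '); do
    xe vm-shutdown uuid=$uuid
  done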


CPUs
Set the number of cores with:
xe vm-param-set platform:cores-per-socket=4 uuid=xxxxxx
Set the number of CPUs at startup:
xe vm-param-set VCPUs-at-startup=8 uuid=xxxxxx
Set the max number of CPUs:
xe vm-param-set VCPUs-max=8 uuid=xxxxxxx
Hosts
List hosts
xe host-list
Shutdown host
xe host-shutdown host=<uuid>
Remove Host from Pool
xe host-forget uuid=<toasted_host_uuid>
Get Pool Master UUID
xe pool-list params=master | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"
Eject host from pool
xe pool-eject host-uuid=9712025f-9b98-4c25-81ef-9222993b71f9
Get VMs running on specified host
xe vm-list resident-on=<host-uuid> is-control-domain=false
Pending tasks:
xe task-list #to view the Pending tasks
xe task-cancel force=true uuid=<UUID> #to cancel a specific task
Last resort:
xe-toolstack-restart
Networking
Lists networks
xe network-list
Lists Physical Network Cards with specified MAC Address
xe pif-list MAC=1c:c1:de:6b:9f:22
Create a new Network
xe network-create name-label=VLAN_DMZ
Assign a network to a Physical Network Card with a VLAN
xe vlan-create network-uuid=329b55d1-0f77-512a-63ed-8b6bcf429e99 pif-uuid=80c1ea1a-4beb-c1ee-f69d-14e3a699587e vlan=205
Backups
Export VM or snapshot to XVA
xe vm-export vm=<uuid_or_name> filename=/backup/Ubuntu_backup.xva
Import XVA file
xe vm-import vm=<name> filename=/backup/Ubuntu_backup.xva
Create a snapshot
xe vm-snapshot vm="<vm_name>" new-name-label="snapshot_name"
Convert snapshot to template
xe snapshot-copy uuid=<snapshot_uuid> sr-uuid=<sr_uuid> new-name-description="Description" new-name-label="Template Name"
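
A minimal live-backup sketch built from these pieces (assumes /backup exists; flipping is-a-template off on the snapshot makes it exportable as a VM, the pattern used by the backup scripts linked below):
VM=<vm_name>
SNAPUID=$(xe vm-snapshot vm=$VM new-name-label=backup-$(date +%Y%m%d))
xe template-param-set is-a-template=false uuid=$SNAPUID
xe vm-export vm=$SNAPUID filename=/backup/$VM-$(date +%Y%m%d).xva
xe vm-uninstall uuid=$SNAPUID force=true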



http://wiki.xensource.com/xenwiki/Command_Line_Interface
convert centos to Xen VM
http://forums.citrix.com/thread.jspa?threadID=265091
backup live XenServer
http://www.computerforums.org/forums/server-articles/how-setup-live-xenservers-backups-208663.html
fast-clone VM
http://virtualfuture.info/2010/10/xenserver-fast-clone-a-vm-120-times-in-90-seconds/
Zero-Downtime Limited-Space Backup and Restore
http://community.citrix.com/display/xs/Zero-Downtime+Limited-Space+Backup+and+Restore
http://community.citrix.com/display/xs/XenServer+Final+Backup+Script+with+or+without+snapshot
http://www.charleslabri.com/back-up-xenserver-6-to-network-share-with-fancy-scripting-and-lots-of-fun-and-no-downtime/
** http://blog.andyburton.co.uk/index.php/2009-11/updated-citrix-xenserver-5-5-automatic-vm-backup-scripts/
http://www.8layer8.com/?p=260
