Example: Assigning a Host USB Device to a Guest VM. This example is based on qemu-kvm (0.15.0) as installed in Fedora 15. We will first show how to do this manually, and then how to do it using the virt-manager tool. This HOWTO is limited to UHCI devices (no USB 2.0 EHCI).
Here we'll use a phone attached to the host:
# lsusb
Bus 002 Device 003: ID 18d1:4e11 Google Inc. Nexus One
(Note the Bus and Device numbers.)
Manually, using the qemu-kvm command line:
# /usr/bin/qemu-kvm -m 1024 -name f15 -drive file=/images/f15.img,if=virtio -usb -device usb-host,hostbus=2,hostaddr=3
Here we add -usb to add a host controller, and -device usb-host,hostbus=2,hostaddr=3 to attach the host's USB device at Bus 2, Device 3.
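The Bus and Device numbers map directly onto the hostbus and hostaddr arguments. As a small sketch, you could extract them from an lsusb line like this (the sample line is the phone from this example; the stripping of leading zeros is an assumption about how you want the numbers formatted):

```shell
# Extract hostbus/hostaddr for qemu-kvm from an lsusb output line.
line="Bus 002 Device 003: ID 18d1:4e11 Google Inc. Nexus One"
hostbus=$(echo "$line" | awk '{print $2}' | sed 's/^0*//')               # "002" -> "2"
hostaddr=$(echo "$line" | awk '{print $4}' | tr -d ':' | sed 's/^0*//')  # "003:" -> "3"
echo "-device usb-host,hostbus=${hostbus},hostaddr=${hostaddr}"
```

On a live host you would feed the matching line of `lsusb` output into the same parsing.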
Simple as that. Now we can verify this in the guest:
$ lspci
00:01.2 USB Controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
$ lsusb
Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 002: ID 18d1:4e11 Google Inc. Nexus One Phone
Enable USB Mass Storage on the phone, and the guest should display a dialog for the new USB filesystem.
$ ls /media/mountpoint
Android/ data/ DCIM/
Managed, using virt-manager: this assumes you have already created a VM using virt-manager.
The VM is not running, and you'd like to add a USB host device to it. Start virt-manager, and open your VM by double-clicking on it. Click the virtual hardware details button (the lightbulb).
Now click Add Hardware, and choose USB Host Device. Here we choose the same phone device. Start the VM and verify that the USB host controller and device show up as above.
Now enable USB Mass Storage on the phone, and the guest should display a dialog for the new USB filesystem.
The IDE controller has a design which goes back to the 1984 PC/AT disk controller.
Even though this controller has been superseded by more recent designs, each and every OS you can think of supports it, making it a great choice if you want to run an OS released before 2003. You can connect up to 4 devices to this controller. The SATA (Serial ATA) controller, dating from 2003, has a more modern design, allowing higher throughput and a greater number of devices to be connected. You can connect up to 6 devices to this controller. The SCSI controller, designed in 1985, is commonly found on server-grade hardware and can connect up to 14 storage devices. Proxmox VE emulates an LSI 53C895A controller by default. A SCSI controller of type VirtIO SCSI is the recommended setting if you aim for performance, and has been automatically selected for newly created Linux VMs since Proxmox VE 4.3.
Linux distributions have supported this controller since 2012, and FreeBSD since 2014. For Windows OSes, you need to provide an extra ISO containing the drivers during installation.
If you aim for maximum performance, you can select a SCSI controller of type VirtIO SCSI single, which allows you to select the IO Thread option. When selecting VirtIO SCSI single, Qemu creates a new controller for each disk instead of adding all disks to the same controller. The VirtIO Block controller, often just called VirtIO or virtio-blk, is an older type of paravirtualized controller. It has been superseded, in terms of features, by the VirtIO SCSI controller. The QEMU image format is a copy-on-write format which allows snapshots and thin provisioning of the disk image. The raw disk image is a bit-for-bit image of a hard disk, similar to what you would get from executing the dd command on a block device in Linux. This format does not support thin provisioning or snapshots by itself; it requires cooperation from the storage layer for these tasks.
It may, however, be up to 10% faster than the QEMU image format. See this benchmark for details. The VMware image format only makes sense if you intend to import/export the disk image to other hypervisors. It is perfectly safe if the overall number of cores of all your VMs is greater than the number of cores on the server (e.g., 4 VMs with 4 cores each on a machine with only 8 cores). In that case the host system will balance the Qemu execution threads between your server cores, just as if you were running a standard multithreaded application. However, Proxmox VE will prevent you from assigning more virtual CPU cores than physically available, as this would only bring the performance down due to the cost of context switches.
In addition to the number of virtual cores, you can configure how many resources a VM gets in relation to the host CPU time and in relation to other VMs. With the cpulimit (“Host CPU Time”) option you can limit how much CPU time the whole VM can use on the host. It is a floating point value representing CPU time in percent, so 1.0 is equal to 100%, 2.5 to 250%, and so on. If a single process fully used one single core, it would have 100% CPU time usage. If a VM with four cores utilizes all its cores fully, it would theoretically use 400%. In reality the usage may be even a bit higher, as Qemu can have additional threads for VM peripherals besides the vCPU ones.
This setting can be useful when a VM should have multiple vCPUs because it runs a few processes in parallel, but the VM as a whole should not be able to run all vCPUs at 100% at the same time. A specific example: say we have a VM which would profit from having 8 vCPUs, but at no time should all 8 cores run at full load, as this would overload the server so much that other VMs and CTs would get too little CPU. So we set cpulimit to 4.0 (=400%). If all cores did the same heavy work, they would each get 50% of a real host core's CPU time. But if only 4 were doing work, they could each still get almost 100% of a real core.
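The cpulimit arithmetic in this example can be spelled out with a quick awk check (the 8-vCPU / 4.0-limit numbers are the ones from the text above):

```shell
# 8 vCPUs capped at cpulimit 4.0 (= 400% of one host core).
awk 'BEGIN {
  cpulimit = 4.0; vcpus = 8
  # All 8 vCPUs busy: each gets an equal slice of the cap.
  printf "all 8 busy: %.0f%% of a host core each\n", cpulimit * 100 / vcpus
  # Only 4 busy: the cap allows each of them a full core.
  printf "only 4 busy: %.0f%% of a host core each\n", cpulimit * 100 / 4
}'
```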
Qemu can emulate a number of different CPU types from 486 to the latest Xeon processors. Each new processor generation adds new features, like hardware assisted 3D rendering, random number generation, memory protection, etc. Usually you should select for your VM a processor type which closely matches the CPU of the host system, as it means that the host CPU features (also called CPU flags) will be available in your VMs. If you want an exact match, you can set the CPU type to host, in which case the VM will have exactly the same CPU flags as your host system. You can also optionally emulate a NUMA architecture in your VMs. The basics of the NUMA architecture mean that instead of having a global memory pool available to all your cores, the memory is spread into local banks close to each socket. This can bring speed improvements, as the memory bus is no longer a bottleneck.
If your system has a NUMA architecture (if the command numactl --hardware | grep available returns more than one node, then your host system has a NUMA architecture), we recommend activating the option, as this allows proper distribution of the VM resources on the host system. This option is also required to hot-plug cores or RAM in a VM.
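A sketch of that check, parsing a captured sample of the `numactl --hardware` output rather than running the live command (the two-node sample line is an assumption for illustration):

```shell
# On the host you would run: numactl --hardware | grep available
# Here we parse a captured sample of that output.
sample="available: 2 nodes (0-1)"
nodes=$(echo "$sample" | awk '{print $2}')
if [ "$nodes" -gt 1 ]; then
  echo "host is NUMA: enable the NUMA option for the VM"
else
  echo "single memory node: NUMA option not needed"
fi
```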
When multiple VMs use the autoallocate facility, it is possible to set a Shares coefficient which indicates the relative amount of the free host memory that each VM should take. Suppose, for instance, you have four VMs, three of them running an HTTP server and the last one a database server. To cache more database blocks in the database server's RAM, you would like to prioritize the database VM when spare RAM is available. For this you assign a Shares property of 3000 to the database VM, leaving the other VMs at the Shares default of 1000. The host server has 32GB of RAM and is currently using 16GB, leaving 32 * 80/100 - 16 = 9.6GB RAM to be allocated to the VMs. The database VM will get 9.6 * 3000 / (3000 + 1000 + 1000 + 1000) = 4.8GB extra RAM, and each HTTP server will get 1.6GB. Intel E1000 is the default, and emulates an Intel Gigabit network card. The VirtIO paravirtualized NIC should be used if you aim for maximum performance.
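The share allocation can be verified with awk, using the numbers from the example (32GB host, 16GB in use, 80% of total distributable, shares of 3000/1000/1000/1000):

```shell
awk 'BEGIN {
  spare = 32 * 80 / 100 - 16            # 9.6 GB spare for auto-allocation
  sum   = 3000 + 1000 + 1000 + 1000     # total Shares across the four VMs
  printf "spare=%.1fGB db=%.1fGB http=%.1fGB\n", spare, spare * 3000 / sum, spare * 1000 / sum
}'
```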
Like all VirtIO devices, the guest OS needs the proper driver installed. The Realtek 8139 emulates an older 100 Mb/s network card, and should only be used when emulating older operating systems (released before 2002). The vmxnet3 is another paravirtualized device, which should only be used when importing a VM from another hypervisor. In the default Bridged mode, each virtual NIC is backed on the host by a tap device (a software loopback device simulating an Ethernet NIC).
This tap device is added to a bridge, by default vmbr0 in Proxmox VE. In this mode, VMs have direct access to the Ethernet LAN on which the host is located. In the alternative NAT mode, each virtual NIC will only communicate with the Qemu user networking stack, where a built-in router and DHCP server can provide network access. This built-in DHCP server serves addresses in the private 10.0.2.0/24 range.
The NAT mode is much slower than the bridged mode, and should only be used for testing. Start/Shutdown order: defines the start order priority. Set it to 1 if you want the VM to be the first to be started. (We use the reverse startup order for shutdown, so a machine with a start order of 1 would be the last to be shut down.) Startup delay: defines the interval between this VM's start and subsequent VM starts. Set it to 240 if you want to wait 240 seconds before starting other VMs. Shutdown timeout: defines the duration in seconds Proxmox VE should wait for the VM to be offline after issuing a shutdown command.
By default this value is set to 60, which means that Proxmox VE will issue a shutdown request, wait 60s for the machine to be offline, and, if after 60s the machine is still online, report that the shutdown action failed. A VM export from a foreign hypervisor usually takes the form of one or more disk images, with a configuration file describing the settings of the VM (RAM, number of cores). The disk images can be in the vmdk format if the disks come from VMware or VirtualBox, or qcow2 if the disks come from a KVM hypervisor. The most popular configuration format for VM exports is the OVF standard, but in practice interoperation is limited because many settings are not implemented in the standard itself, and hypervisors export the supplementary information in non-standard extensions. Cputype= ( default = kvm64) Emulated CPU type.
Hidden= ( default = 0) Do not identify as a KVM virtual machine. Cpulimit: (0 - 128) ( default = 0) Limit of CPU usage. If the computer has 2 CPUs, it has a total of 2 CPU time. Value 0 indicates no CPU limit. Cpuunits: (2 - 262144) ( default = 1024) CPU weight for a VM. The argument is used by the kernel fair scheduler.
The larger the number is, the more CPU time this VM gets. Number is relative to weights of all the other running VMs. Description: Description for the VM. Only used on the configuration web interface. This is saved as comment inside the configuration file.
Efidisk0: file= ,format= ,size= Configure a disk for storing EFI vars. You can use the lspci command to list existing PCI devices.
Pcie= ( default = 0) Choose the PCI-express bus (needs the q35 machine model). Rombar= ( default = 1) Specify whether or not the device’s ROM will be visible in the guest’s memory map. Romfile= Custom pci device rom filename (must be located in /usr/share/kvm/). X-vga= ( default = 0) Enable vfio-vga device support. Hotplug: ( default = network,disk,usb) Selectively enable hotplug features. This is a comma separated list of hotplug features: network, disk, cpu, memory and usb.
Use 0 to disable hotplug completely. Value 1 is an alias for the default network,disk,usb. Hugepages: Enable/disable hugepages memory. Ide[n]: file= ,aio= ,backup= ,bps= ,bpsmaxlength= ,bpsrd= ,bpsrdmaxlength= ,bpswr= ,bpswrmaxlength= ,cache= ,cyls= ,detectzeroes= ,discard= ,format= ,heads= ,iops= ,iopsmax= ,iopsmaxlength= ,iopsrd= ,iopsrdmax= ,iopsrdmaxlength= ,iopswr= ,iopswrmax= ,iopswrmaxlength= ,mbps= ,mbpsmax= ,mbpsrd= ,mbpsrdmax= ,mbpswr= ,mbpswrmax= ,media= ,model= ,replicate= ,rerror= ,secs= ,serial= ,size= ,snapshot= ,trans= ,werror= Use volume as IDE hard disk or CD-ROM (n is 0 to 3). Aio= AIO type to use. Backup= Whether the drive should be included when making backups. Bps= Maximum r/w speed in bytes per second.
Bpsmaxlength= Maximum length of I/O bursts in seconds. Bpsrd= Maximum read speed in bytes per second.
Bpsrdmaxlength= Maximum length of read I/O bursts in seconds. Bpswr= Maximum write speed in bytes per second. Bpswrmaxlength= Maximum length of write I/O bursts in seconds. Cache= The drive’s cache mode. Cyls= Force the drive’s physical geometry to have a specific cylinder count.
Detectzeroes= Controls whether to detect and try to optimize writes of zeroes. Discard= Controls whether to pass discard/trim requests to the underlying storage. File= The drive’s backing volume.
Format= The drive’s backing file’s data format. Heads= Force the drive’s physical geometry to have a specific head count. Iops= Maximum r/w I/O in operations per second.
Iopsmax= Maximum unthrottled r/w I/O pool in operations per second. Iopsmaxlength= Maximum length of I/O bursts in seconds. Iopsrd= Maximum read I/O in operations per second. Iopsrdmax= Maximum unthrottled read I/O pool in operations per second. Iopsrdmaxlength= Maximum length of read I/O bursts in seconds.
Iopswr= Maximum write I/O in operations per second. Iopswrmax= Maximum unthrottled write I/O pool in operations per second. Iopswrmaxlength= Maximum length of write I/O bursts in seconds.
Mbps= Maximum r/w speed in megabytes per second. Mbpsmax= Maximum unthrottled r/w pool in megabytes per second.
Mbpsrd= Maximum read speed in megabytes per second. Mbpsrdmax= Maximum unthrottled read pool in megabytes per second. Mbpswr= Maximum write speed in megabytes per second. Mbpswrmax= Maximum unthrottled write pool in megabytes per second.
Media= ( default = disk) The drive’s media type. Model= The drive’s reported model name, url-encoded, up to 40 bytes long. Replicate= ( default = 1) Whether the drive should be considered for replication jobs.
Rerror= Read error action. Secs= Force the drive’s physical geometry to have a specific sector count. Serial= The drive’s reported serial number, url-encoded, up to 20 bytes long. Size= Disk size.
This is purely informational and has no effect. Snapshot= Whether the drive should be included when making snapshots. Trans= Force disk geometry bios translation mode.
Werror= Write error action. Keyboard: ( default = en-us) Keyboard layout for the VNC server.
Default is read from the /etc/pve/datacenter.conf configuration file. Kvm: ( default = 1) Enable/disable KVM hardware virtualization. Localtime: Set the real time clock to local time. This is enabled by default if ostype indicates a Microsoft OS. Lock: Lock/unlock the VM. Machine: (pc|pc(-i440fx)?-\d+\.\d+(\.pxe)?|q35|pc-q35-\d+\.\d+(\.pxe)?) Specifies the Qemu machine type. Memory: (16 - N) ( default = 512) Amount of RAM for the VM in MB. This is the maximum available memory when you use the balloon device. Migratedowntime: (0 - N) ( default = 0.1) Set maximum tolerated downtime (in seconds) for migrations.
Migratespeed: (0 - N) ( default = 0) Set maximum speed (in MB/s) for migrations. Value 0 is no limit. Name: Set a name for the VM. Only used on the configuration web interface.
Net[n]: model= ,bridge= ,firewall= ,linkdown= ,macaddr= ,queues= ,rate= ,tag= ,trunks= ,= Specify network devices. The DHCP server assigns addresses to the guest starting from 10.0.2.15. Firewall= Whether this interface should be protected by the firewall. Linkdown= Whether this interface should be disconnected (like pulling the plug). Macaddr= MAC address.
That address must be unique within your network. This is automatically generated if not specified. Model= Network card model. The virtio model provides the best performance with very low CPU overhead.
If your guest does not support this driver, it is usually best to use e1000. Queues= (0 - 16) Number of packet queues to be used on the device. Rate= (0 - N) Rate limit in mbps (megabytes per second) as floating point number. Tag= (1 - 4094) VLAN tag to apply to packets on this interface.
Trunks= VLAN trunks to pass through this interface. Numa: ( default = 0) Enable/disable NUMA. Numa[n]: cpus= ,hostnodes= ,memory= ,policy= NUMA topology. Cpus= CPUs accessing this NUMA node. Hostnodes= Host NUMA nodes to use. Memory= Amount of memory this NUMA node provides. Policy= NUMA allocation policy.
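To illustrate the network and NUMA options just described, here is a hypothetical fragment of a VM configuration file (all values — the MAC address, VLAN tag, and NUMA sizing — are made up for illustration; vmbr0 and the virtio model match the defaults discussed above):

```
net0: virtio=0A:1B:2C:3D:4E:5F,bridge=vmbr0,firewall=1,tag=100
numa: 1
numa0: cpus=0-3,memory=2048,hostnodes=0,policy=bind
```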
Onboot: ( default = 0) Specifies whether a VM will be started during system bootup. Ostype: Specify guest operating system. This is used to enable special optimizations/features for specific operating systems. Users have reported problems with this option. Protection: ( default = 0) Sets the protection flag of the VM.
This will disable the remove VM and remove disk operations. Reboot: ( default = 1) Allow reboot. If set to 0 the VM exits on reboot. Sata[n]: file= ,aio= ,backup= ,bps= ,bpsmaxlength= ,bpsrd= ,bpsrdmaxlength= ,bpswr= ,bpswrmaxlength= ,cache= ,cyls= ,detectzeroes= ,discard= ,format= ,heads= ,iops= ,iopsmax= ,iopsmaxlength= ,iopsrd= ,iopsrdmax= ,iopsrdmaxlength= ,iopswr= ,iopswrmax= ,iopswrmaxlength= ,mbps= ,mbpsmax= ,mbpsrd= ,mbpsrdmax= ,mbpswr= ,mbpswrmax= ,media= ,replicate= ,rerror= ,secs= ,serial= ,size= ,snapshot= ,trans= ,werror= Use volume as SATA hard disk or CD-ROM (n is 0 to 5).
Aio= AIO type to use. Backup= Whether the drive should be included when making backups. Bps= Maximum r/w speed in bytes per second. Bpsmaxlength= Maximum length of I/O bursts in seconds.
Bpsrd= Maximum read speed in bytes per second. Bpsrdmaxlength= Maximum length of read I/O bursts in seconds. Bpswr= Maximum write speed in bytes per second. Bpswrmaxlength= Maximum length of write I/O bursts in seconds. Cache= The drive’s cache mode. Cyls= Force the drive’s physical geometry to have a specific cylinder count. Detectzeroes= Controls whether to detect and try to optimize writes of zeroes. Discard= Controls whether to pass discard/trim requests to the underlying storage.
File= The drive’s backing volume. Format= The drive’s backing file’s data format. Heads= Force the drive’s physical geometry to have a specific head count.
Iops= Maximum r/w I/O in operations per second. Iopsmax= Maximum unthrottled r/w I/O pool in operations per second. Iopsmaxlength= Maximum length of I/O bursts in seconds. Iopsrd= Maximum read I/O in operations per second.
Iopsrdmax= Maximum unthrottled read I/O pool in operations per second. Iopsrdmaxlength= Maximum length of read I/O bursts in seconds. Iopswr= Maximum write I/O in operations per second.
Iopswrmax= Maximum unthrottled write I/O pool in operations per second. Iopswrmaxlength= Maximum length of write I/O bursts in seconds. Mbps= Maximum r/w speed in megabytes per second.
Mbpsmax= Maximum unthrottled r/w pool in megabytes per second. Mbpsrd= Maximum read speed in megabytes per second. Mbpsrdmax= Maximum unthrottled read pool in megabytes per second. Mbpswr= Maximum write speed in megabytes per second. Mbpswrmax= Maximum unthrottled write pool in megabytes per second.
Media= ( default = disk) The drive’s media type. Replicate= ( default = 1) Whether the drive should be considered for replication jobs. Rerror= Read error action. Secs= Force the drive’s physical geometry to have a specific sector count. Serial= The drive’s reported serial number, url-encoded, up to 20 bytes long. Size= Disk size.
This is purely informational and has no effect. Snapshot= Whether the drive should be included when making snapshots. Trans= Force disk geometry bios translation mode. Werror= Write error action. Scsi[n]: Use volume as SCSI hard disk or CD-ROM (n is 0 to 13). Aio= AIO type to use. Backup= Whether the drive should be included when making backups.
Bps= Maximum r/w speed in bytes per second. Bpsmaxlength= Maximum length of I/O bursts in seconds. Bpsrd= Maximum read speed in bytes per second.
Bpsrdmaxlength= Maximum length of read I/O bursts in seconds. Bpswr= Maximum write speed in bytes per second.
Bpswrmaxlength= Maximum length of write I/O bursts in seconds. Cache= The drive’s cache mode. Cyls= Force the drive’s physical geometry to have a specific cylinder count. Detectzeroes= Controls whether to detect and try to optimize writes of zeroes. Discard= Controls whether to pass discard/trim requests to the underlying storage. File= The drive’s backing volume.
Format= The drive’s backing file’s data format. Heads= Force the drive’s physical geometry to have a specific head count. Iops= Maximum r/w I/O in operations per second. Iopsmax= Maximum unthrottled r/w I/O pool in operations per second. Iopsmaxlength= Maximum length of I/O bursts in seconds. Iopsrd= Maximum read I/O in operations per second.
Iopsrdmax= Maximum unthrottled read I/O pool in operations per second. Iopsrdmaxlength= Maximum length of read I/O bursts in seconds. Iopswr= Maximum write I/O in operations per second.
Iopswrmax= Maximum unthrottled write I/O pool in operations per second. Iopswrmaxlength= Maximum length of write I/O bursts in seconds. Iothread= Whether to use iothreads for this drive. Mbps= Maximum r/w speed in megabytes per second. Mbpsmax= Maximum unthrottled r/w pool in megabytes per second. Mbpsrd= Maximum read speed in megabytes per second.
Mbpsrdmax= Maximum unthrottled read pool in megabytes per second. Mbpswr= Maximum write speed in megabytes per second. Mbpswrmax= Maximum unthrottled write pool in megabytes per second. Media= ( default = disk) The drive’s media type. Queues= (2 - N) Number of queues. Replicate= ( default = 1) Whether the drive should be considered for replication jobs.
Rerror= Read error action. Scsiblock= ( default = 0) Whether to use scsi-block for full passthrough of the host block device. Can lead to I/O errors in combination with low memory or high memory fragmentation on the host. Secs= Force the drive’s physical geometry to have a specific sector count. Serial= The drive’s reported serial number, url-encoded, up to 20 bytes long. Size= Disk size. This is purely informational and has no effect. Snapshot= Whether the drive should be included when making snapshots.
Trans= Force disk geometry bios translation mode. Werror= Write error action. Scsihw: ( default = lsi) SCSI controller model. Serial[n]: (/dev/.+ | socket) Create a serial device inside the VM (n is 0 to 3), and pass through a host serial device (e.g. /dev/ttyS0), or create a unix socket on the host side (use qm terminal to open a terminal connection).
Family= Set SMBIOS1 family string. Manufacturer= Set SMBIOS1 manufacturer. Product= Set SMBIOS1 product ID. Serial= Set SMBIOS1 serial number.
Sku= Set SMBIOS1 SKU string. Uuid= Set SMBIOS1 UUID. Version= Set SMBIOS1 version. Smp: (1 - N) ( default = 1) The number of CPUs. Please use option -sockets instead. Sockets: (1 - N) ( default = 1) The number of CPU sockets.
Startdate: (now | YYYY-MM-DD | YYYY-MM-DDTHH:MM:SS) ( default = now) Set the initial date of the real time clock. Valid formats for date are: now or 2006-06-17T16:01:21 or 2006-06-17. Startup: `[[order=]\d+] [,up=\d+] [,down=\d+]` Startup and shutdown behavior.
Order is a non-negative number defining the general startup order. Shutdown is done with reverse ordering. Additionally, you can set the up or down delay in seconds, which specifies a delay to wait before the next VM is started or stopped. Tablet: ( default = 1) Enable/disable the USB tablet device. This device is usually needed to allow absolute mouse positioning with VNC.
Otherwise the mouse is out of sync with normal VNC clients. If you're running lots of console-only guests on one host, you may consider disabling this to save some context switches. This is turned off by default if you use SPICE (-vga=qxl). Tdf: ( default = 0) Enable/disable time drift fix. Template: ( default = 0) Enable/disable Template. Unused[n]: Reference to unused volumes.
This is used internally, and should not be modified manually. Usb[n]: host= ,usb3= Configure a USB device (n is 0 to 4). The value spice can be used to add a USB redirection device for SPICE.
Usb3= ( default = 0) Specifies whether the given host option is a USB3 device or port (this does currently not work reliably with SPICE redirection and is then ignored). Vcpus: (1 - N) ( default = 0) Number of hotplugged vcpus. Vga: Select the VGA type. If you want to use high resolution modes (>= 1280x1024x16) then you should use the std or vmware options. Default is std for win8/win7/w2k8, and cirrus for other OS types.
The qxl option enables the SPICE display server. For Windows OS types you can select how many independent displays you want; Linux guests can add displays themselves. You can also run without any graphics card, using a serial device as terminal. Virtio[n]: file= ,aio= ,backup= ,bps= ,bpsmaxlength= ,bpsrd= ,bpsrdmaxlength= ,bpswr= ,bpswrmaxlength= ,cache= ,cyls= ,detectzeroes= ,discard= ,format= ,heads= ,iops= ,iopsmax= ,iopsmaxlength= ,iopsrd= ,iopsrdmax= ,iopsrdmaxlength= ,iopswr= ,iopswrmax= ,iopswrmaxlength= ,iothread= ,mbps= ,mbpsmax= ,mbpsrd= ,mbpsrdmax= ,mbpswr= ,mbpswrmax= ,media= ,replicate= ,rerror= ,secs= ,serial= ,size= ,snapshot= ,trans= ,werror= Use volume as VIRTIO hard disk (n is 0 to 15).
Aio= AIO type to use. Backup= Whether the drive should be included when making backups. Bps= Maximum r/w speed in bytes per second. Bpsmaxlength= Maximum length of I/O bursts in seconds. Bpsrd= Maximum read speed in bytes per second.
Bpsrdmaxlength= Maximum length of read I/O bursts in seconds. Bpswr= Maximum write speed in bytes per second. Bpswrmaxlength= Maximum length of write I/O bursts in seconds. Cache= The drive’s cache mode. Cyls= Force the drive’s physical geometry to have a specific cylinder count.
Detectzeroes= Controls whether to detect and try to optimize writes of zeroes. Discard= Controls whether to pass discard/trim requests to the underlying storage. File= The drive’s backing volume. Format= The drive’s backing file’s data format. Heads= Force the drive’s physical geometry to have a specific head count. Iops= Maximum r/w I/O in operations per second. Iopsmax= Maximum unthrottled r/w I/O pool in operations per second.
Iopsmaxlength= Maximum length of I/O bursts in seconds. Iopsrd= Maximum read I/O in operations per second. Iopsrdmax= Maximum unthrottled read I/O pool in operations per second.
Iopsrdmaxlength= Maximum length of read I/O bursts in seconds. Iopswr= Maximum write I/O in operations per second. Iopswrmax= Maximum unthrottled write I/O pool in operations per second.
Iopswrmaxlength= Maximum length of write I/O bursts in seconds. Iothread= Whether to use iothreads for this drive. Mbps= Maximum r/w speed in megabytes per second.
Mbpsmax= Maximum unthrottled r/w pool in megabytes per second. Mbpsrd= Maximum read speed in megabytes per second.
Mbpsrdmax= Maximum unthrottled read pool in megabytes per second. Mbpswr= Maximum write speed in megabytes per second. Mbpswrmax= Maximum unthrottled write pool in megabytes per second. Media= ( default = disk) The drive’s media type.
Replicate= ( default = 1) Whether the drive should be considered for replication jobs. Rerror= Read error action. Secs= Force the drive’s physical geometry to have a specific sector count. Serial= The drive’s reported serial number, url-encoded, up to 20 bytes long. Size= Disk size.
This is purely informational and has no effect. Snapshot= Whether the drive should be included when making snapshots. Trans= Force disk geometry bios translation mode. Werror= Write error action. Vmstatestorage: Default storage for VM state volumes/files.
Watchdog: model= ,action= Create a virtual hardware watchdog device. Once enabled (by a guest action), the watchdog must be periodically polled by an agent inside the guest or else the watchdog will reset the guest (or execute the respective action specified).
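Pulling several of the options described above together, a hypothetical /etc/pve/qemu-server/100.conf might look like this (every value here — VM ID, storage name, disk size, USB device ID — is illustrative, not a recommendation; the i6300esb watchdog model and virtio-scsi-single controller are the ones discussed in this reference):

```
# hypothetical VM 100 configuration fragment
cores: 4
memory: 4096
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,discard=on,iothread=1,size=32G
startup: order=1,up=30,down=60
usb0: host=18d1:4e11
watchdog: model=i6300esb,action=reset
```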
I know Xen didn't use to do it, but I was also under the impression that it had been on the "it's coming" list for so long that it must be there by now. VMware and VirtualBox have had serial and USB pass-through for a while. It's only now, as I'm looking at how I'm going to build a server for a customer next week, that I've discovered this thread, and I'm disappointed to see that it's still not there, and it sounds like it's still low priority and a long way off, if coming at all.
I've managed to do the fiddle with the iSCSI driver to get Windows guests to access a tape drive on the host, and other fiddles with "network" USB servers to get a Windows guest to see a USB security dongle on the host. Is there some other ("network"?) method of allowing a Windows (SBS) or Linux (HylaFAX) guest to see a 9600 baud fax modem on a serial port? One of the critical questions about native serial port integration is its necessity in industrial automation processes, or even in small RS-232 controller apps hosted inside XenServer.
We face this environment with our Parking Control System, where all the communication between the database interface and the hardware is done over an RS-232 and RS-485 network. I'm a senior user of VMware Server, vSphere, and lately the XenServer solution, and I'm glad to say that you've made a huge step with the friendly interface and the major features for clustering and physical resource sharing. That's why I'm so interested in the RS-232 capability; this would cover our needs 120%. Hope to hear news ASAP. Not only me, but I think everyone that works with automation processes that depend heavily on serial communication interfaces.
Hi Danilo, feature requests are better suited to the correct forum. I do understand the requirement of serial pass-through, and this can be done on platforms (VMware) that have full-hardware virtualization with no pass-through to the physical host. As a workaround, it is best to use serial over IP. I know this has been requested for some time, but there needs to be a business case for it. The more people who request it, the better the likelihood of it getting focused on. If only one customer or a handful request it, then it is less likely than a thousand requesting HA (pre-XenServer 5.0).
I'm sorry that we have not had this implemented at this time. Regards, James Cannon. I have spent the last several weeks fighting tooth and nail to get a modem bank shared across several VMs, and I would like to say that we tried every single product we could find and afford. It boiled down to a separate physical fax server with multiport modems that could be shared.
We also found that one product would allow Class 2 and 2.0 fax but not Class 1. MS Fax is hard-coded to Class 1, which is a problem if you need it. VMware ESX allows communication with the physical COM ports, but we were unable to get them to work reliably, even with support's help.
So please get this working soon. Roger, Citrix isn't hearing these complaints from users because users are choosing not to use your platform due to issues like this.
Serial pass-through is fundamentally basic. This attitude from Citrix of "if we don't hear thousands of complaints over the same issue, we won't do anything about it" is just not good business. Only the most vocal customers will come to forums to vent frustrations or suggest new features; the rest just don't use your products. This is a reason why XenCenter is still Windows-only, etc. Citrix, you have a great product here, but you are standing in its way.
Well, yes, true. If one hacks the system, one has to take full responsibility for that and not complain to Citrix if something breaks. And, yes, it's not an appropriate way to have to deal with issues like this. As mentioned at great length in that other thread, I really feel that rapid development has to involve the elements of users, projects, and vendors. Any bottleneck on the part of one or the other will hold up the process; it has to be synergistic and efficient, and there should be a way for the public to make more direct contributions. The Linux community has provided RPMs for ages that provide easy-to-add functionality. Apache has supported modules for the Web server.
Smart phones have apps. There should be some sort of plug-in mechanism for customizations that will still work without jeopardizing the integrity of the underlying structure of the product. Linux kernel modules are great compared to having to rebuild the kernel each time you wanted to make a change, like back in the mid 1980s (I sure don't miss that). There are no simple answers to complex processes like these. However, dialogs like this help stress the importance of these points.
I, for one, would love to see a Master Class in which users present ideas and desires ahead of time and the panel openly discusses the pros, cons, and what's considered feasible regarding the various inputs. Tobias, that is a great idea regarding the Master Class focusing on whatever users want/desire. It would help to address some of the users' complaints or challenges when deploying and maintaining a XenServer infrastructure at any scale. On the flip side, it's unlikely to make much headway. Speaking specifically about serial and USB passthrough, this is something Citrix has acknowledged a few times as something people need and want, but they have for years chosen not to implement it (even in light of other leading hypervisors implementing similar technology, and it being in the upstream codebase). I don't know why; it would seem to me that the more generalized features they can pack into the product, the more attractive it will be.
You've mentioned before that it's rather common for most shops to run multiple hypervisors - I agree. But why is this? It's simple: no single hypervisor today satisfies every itch. As a loyal Citrix fanboy, I really badly want XenServer to become that hypervisor that can do it all, so that I can continue to use Citrix and their awesome products. In order to get there, Citrix needs to address some of these rather mundane and boring details such as USB passthrough and cross-platform/platform-agnostic administration tools instead of continuing to build new sexy features. A weak foundation will not support a top-heavy building; we must construct as robust and strong a foundation as possible, then build upwards towards the sky!
As it sits, the more work I do in XenServer, the more I'm realizing its shortcomings, both in technology and in the backing company's decisions (for better or worse). It sucks because I'm finding myself starting to pay more attention to the other hypervisor choices out there and wondering if I could make better use of those instead. Of course the Xen Project is always there and I could roll my own, but I rather enjoy the packaged XenServer with its built-in enterprise features and, of course, the Citrix support if/when needed. Currently I'm feeling like Citrix is slowly carving me out of the equation, which means my enterprise licensing will be going with me (if that happens).
Master Classes typically have attendance between a few hundred and around a thousand - not tiny numbers, and a lot of die-hard users - so as a venue it reaches a rather targeted audience and, compared to conferences, is relatively inexpensive to put on. As mentioned before, the reason for multiple hypervisors depends partly on the local expertise, and you're also correct that there is no "one size fits all" hypervisor.
The same can be said for cars, and to build on that analogy, it'd be hard to imagine an automobile that could really fit every need, not to mention that if there were no serious competition, there would not be enough impetus to drive development. Personally, I see the next stage of computer development as hopefully more of a "Lego" model, where you buy or get kits for free and plug them into your environment, and the smarts behind it are all buried down at layers most people don't have to worry about, yet they mesh simply and seamlessly with the existing infrastructure. This is where the whole issue of proprietary drivers and interfaces factors in, and either the lack of standards or the presence of too many standards becomes a support nightmare. But one can always dream. Back to reality: BYOD needs to be transmogrified to also mean "build your own distribution," in the sense that the server platform should be easy to build upon, and ultimately all it does is distribute services from a centrally managed pool of devices and applications.
Cloud computing needs to move in that direction, especially where admins don't have access to a physical computer room where - if needed - they can hit the power button. XenServer is nice in that way - you really don't have to know much about Linux to install it, and many of the functions can be done via XenCenter without a deep understanding of what goes on in detail. Smart devices are not much different - people manage to use them very well and there are literally hundreds of thousands of apps that all work seamlessly. Plugging in an RPM comes close, but there are still "gotchas" like library version compatibility, shared vs. stand-alone libraries, etc. To get SaaS to really take off, platforms need to be made agnostic, and XenDesktop and Receiver/Worx are good steps in this direction - attention now needs to be refocused on the delivery platform. You're correct that the Xen Project still acts as a clearing house of sorts and ultimately it's up to whomever creates a product (like XenServer) to incorporate what is developed in Xen, but I do feel that Citrix opening this up to a much wider community could crank up the interest and productivity of developers to deliver the sorts of features that even modest numbers of people need or want.
The danger is always to avoid a "bloatware" product, hence the thought that being able to pick and choose the specific features you'd like, with more custom control over the individual installation, would be really super. In other words, maintain a lean and powerful XenServer base and allow that platform to be expanded upon in a modular way (sort of like adding appliances like WLB and DVSC, but better). Good discussion.
I wish others would chime in here, as Citrix and the community need to discuss things like this openly, and it's great that this forum can provide such an opportunity. BTW, Alupis, where are you located? I'm in Northern California, for better or for worse! ;-P The modular design is intriguing. I can envision something similar to how you install most Linux distros these days. So basically, during the initial install of XenServer, you would just select which components or "Extra Packs" (as they used to do with Linux guest support) you wish to use. This would enable users to control which features they want/need and balance that themselves to a level they feel is reasonably non-bloated. You're right, some people may not have a need for USB passthrough and therefore do not want it included in the XenServer base because it creates a slightly larger footprint (probably a poor example, because my understanding of USB and serial passthrough is that they are trivial to implement at this point, but you get the gist).
In my own fantasy land, I could see an à la carte licensing model from Citrix where you could pick and choose which features you needed and only pay for those. So, for example, a small shop that does not require HA shouldn't have to pay for it, but say they do need role-based administration.
With a modular license model and modular feature model, not only could they pay less (which is likely still more than they would have paid before, since they could not justify the entire license cost), but they could have a slimmed-down version of XenServer with only the features they need. To get around significantly tighter margins on selling à la carte licensing options, Citrix could offer no support or forums-only support for those options, while the regular full licensing options would get the whole shebang. To prod at your automobile analogy, I do agree that there may not be a "one size fits all" for every situation. You wouldn't buy a minivan to go off-roading, would you? But even so, both the minivan and the off-road truck likely come with standard features such as a radio, A/C, windows that roll up and down, etc. XenServer lacking certain "normal" features such as USB passthrough and/or serial passthrough is like buying a new car with no windows. It works out great most of the time (summer months), but in the winter you really wish you could roll those windows up (or plug in a USB device).
I'll admit, I have not attended any Master Classes as of yet. Most of the webinar invites I get from Citrix cover stuff I have no interest in or stuff I already know. Opening up a Master Class to specifically discuss the roadmap of the product with its actual users would be great. Submitting questions a week or two prior to the event would be necessary for Citrix to prepare appropriate responses. However, I'd fear Citrix cherry-picking the easy/good-PR ones and skipping the rest. No company likes to be bombarded by users demanding stuff, even though they should listen more carefully.
Master Classes can be quite interactive - you'd be surprised, I think. There is a real-time chat session running with Citrix people answering or responding to questions, sometimes even passing them on to the speakers. There are guests who come on and also sometimes interactively take audience questions. Sure, for most advanced users there's not always a lot of in-depth material, but sensing what the public is looking for, plus the real-time chat sessions and Q&As, can be very informative. If Citrix were willing to do a full session based on Q&A, I think it would go over very well. That all said, events like Synergy really allow one to track down and talk to key Citrix people, as well as vendors and other users.
I found it very informative and inspiring. The one-on-one discussions tend to be the most interesting, because you can push outside the standard lines of a presentation, or corner an engineer and get some concrete information. The à la carte model gets complicated from a billing and tracking viewpoint. Maybe stick with the four tiers or so, but allow flexibility within each tier to add or subtract features based on licensing levels. This, of course, can lead to support complexities, but really wouldn't need to be that much different from an RPM model or adding appliances.
Citrix recognizes, I think, that having too many appliances and special stand-alone components makes the architecture much harder to support. Some things, though, like better built-in backup/restore and disaster recovery features should IMHO not be left to third parties.
The Marathon zero-downtime redundancy option, I believe, isn't even supported past XS 5.6. Things like this compelled us to develop our own in-house mechanism (which we give away for free on Citrix' code share site), and I'm pushing for some updates in future releases to make DR much simpler based on mostly existing capabilities. It's areas like this where consumers need to express their needs and point out the limitations. In any case, I enjoy this sort of banter and hope others - both Citrites and end users - will take notice and chime in. There's no need for this to be primarily a two-person discussion! Wow, this is fantastic news!
I just starred the repo and copied it to a fork in my GitHub so I can take a closer look at the code. C# is very similar syntactically to Java, so perhaps a straight port may be possible (although this is a rather large program, so any port will be complex nonetheless). There may be issues finding suitable replacements for the various libraries used in the project.
Some will port easily: log4net.dll can be replaced with Apache's log4j.jar, NUnit with JUnit, etc. The others I'm not sure what they do as of yet, so it may take some digging. This is awesome for so many reasons I can hardly contain myself! I foresee some awesome plugins/mods of XenCenter coming out from the community within a short period of time, bringing new functionality to the program! On the flip side, I hope this is not a precursor to Citrix dropping the project and no longer officially supporting it (as Mozilla has tried to do with Thunderbird - just drop it to the community and say "here, now you take care of this"). Citrix should continue to innovate with XenCenter (or XenAdmin for the FOSS version).
Re your last paragraph, @Alupis: on the contrary, this is Citrix ensuring the future of XenServer (including XenCenter). We believe that only open source software can survive in the cloud world, and also that only by having an open development model (which is more than just open code) can we be responsive to the needs of customers. For more, see these questions on xenserver.org. PS: Have you considered compiling XenCenter in Mono rather than porting it to Java? My guess is that would be easier, although some of the libraries might still be problematic.