Showing posts with label vmware.

Wednesday, March 19, 2008

esx: killing a stuck VM from the command line - redux!

NOTE: The method in this post seems more accurate and effective than the one in the previous post on this blog, "killing a stuck VM from the command line."


Instructions on how to forcibly terminate a VM if it is unresponsive to the VI client


Here you will be terminating the Master World and User Worlds for the VM, which in turn will terminate the VM's processes.



1. First list the running VMs to determine the VM ID for the affected VM:
# cat /proc/vmware/vm/*/names

vmid=1076 pid=-1 cfgFile="/vmfs/volumes/50823edc-d9110dd9-8994-9ee0ad055a68/vc using sql/vc using sql.vmx" uuid="50 28 4e 99 3d 2b 8d a0-a4 c0 87 c9 8a 60 d2 31" displayName="vc using sql-192.168.1.10"

vmid=1093 pid=-1 cfgFile="/vmfs/volumes/50823edc-d9110dd9-8994-9ee0ad055a68/esx_template/esx_template.vmx" uuid="50 11 7a fc bd ec 0f f4-cb 30 32 a5 c0 3a 01 09" displayName="esx_template"

For this example, we will terminate the VM with vmid='1093'.
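If the host is running many VMs, you can also narrow the listing down by display name. This is only a sketch using the example VM above; substitute the display name of your stuck VM:

# grep 'displayName="esx_template"' /proc/vmware/vm/*/names

The path of the matching names file (and the vmid= field in the output) gives you the VM ID to use in the next step.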




2. Next, we need to find the Master World ID. To do this, type:
# less -S /proc/vmware/vm/1093/cpu/status

Expand the terminal or scroll until you can see the right-most column. This is labelled 'group'. Underneath the column you will find: vm.1092.

In this example '1092' is the ID of the Master World.
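If scrolling is a hassle, you can also pull the group value out directly. This is only a rough sketch and assumes the group entries look exactly like the vm.1092 value described above:

# grep -o 'vm\.[0-9]*' /proc/vmware/vm/1093/cpu/status | sort -u

In this example that should print vm.1092; the number after 'vm.' is the Master World ID.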




3. Run this command to terminate the Master World and the VM running in it:

/usr/lib/vmware/bin/vmkload_app -k 9 1092




4. This should kill all the VM's User Worlds and also the VM's processes.

If successful, you will see output similar to:

# /usr/lib/vmware/bin/vmkload_app --kill 9 1070
Warning: Jul 12 07:24:06.303: Sending signal '9' to world 1070.

If the Master World ID is wrong, you may see:
# /usr/lib/vmware/bin/vmkload_app --kill 9 1071
Warning: Jul 12 07:21:05.407: Sending signal '9' to world 1071.
Warning: Jul 12 07:21:05.407: Failed to forward signal 9 to cartel 1071: 0xbad0061



source

Tuesday, March 27, 2007

esx: obtaining a vm's ip address from the command line

You can get a VM's IP address just using:

vmware-cmd [vmx_path] getguestinfo "ip"

When the guest operating system is running inside a virtual machine, you can pass information
from a script (running in another machine) to the guest operating system, and from the guest
operating system back to the script, through the VMware Tools service. You do this by using a class of shared variables, commonly referred to as GuestInfo.
VMware Tools must be installed and running in the guest operating system before a GuestInfo variable can be read or written inside the guest operating system. (source: VMware Scripting API - 2.3 User's Manual)
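To check the IP of every registered VM in one pass, you can loop over the config paths that vmware-cmd -l prints. A rough sketch (it assumes VMware Tools is running inside each guest, otherwise the query returns nothing useful; the while/read form copes with vmx paths that contain spaces):

vmware-cmd -l | while read -r vmx; do
    echo "$vmx"
    vmware-cmd "$vmx" getguestinfo ip
done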

source

Monday, March 26, 2007

esx: killing a stuck vm from the command line


On the ESX 3 console, find the PID with:

ps -ef | grep [VM name]

and then kill it with:

kill -9 PID
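The two steps can be combined into a rough one-liner (replace myvm with the name of the stuck VM, and double-check what grep matches first, since the pattern can catch more than one process):

kill -9 $(ps -ef | grep -i "myvm" | grep -v grep | awk '{print $2}')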




source

esx: stopping a vm from the command line using powerop_mode

Log in to the Service Console and try the following:


vmware-cmd [vm-cfg-path] stop [powerop_mode]

where [vm-cfg-path] is the location of the vmx file for the VM and [powerop_mode] is either hard, soft or trysoft.

It is tempting to just use hard for the [powerop_mode] when it appears that the VM is really stuck :)
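For example, to ask the guest for a clean shutdown first and fall back to a hard power-off only if VMware Tools does not respond, something like this should work (the vmx path below is made up for illustration):

vmware-cmd /vmfs/volumes/myvolume/myvm/myvm.vmx stop trysoft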




source


Wednesday, March 21, 2007

esx: ide vs sata

Installation on IDE or SATA Drives:

(source: vi3_esx_quickstart.pdf)




The installer displays a warning if you attempt to install ESX Server software on an IDE drive or a SATA drive in ATA emulation mode. It is possible to install and boot ESX Server software on an IDE drive. However, VMFS, the filesystem on which virtual machines are stored, is not supported on IDE or SATA. An ESX Server host must have SCSI storage, NAS, or a SAN on which to store virtual machines.

Monday, March 5, 2007

esx: DMZ within a single esx box

(diagram: DMZ in a box)

In this example, we have four virtual machines running two firewalls, a Web server and an application server to create a DMZ. The Web server and application server sit in the DMZ between the two firewalls. External traffic from the Internet (labeled External) is verified by the first firewall and, if authorized, routed to the virtual switch in the DMZ (the switch in the middle). The Web server and application server are connected to this switch and hence can serve external requests.





This switch is also connected to a firewall that sits between the DMZ and the internal corporate network (labeled Internal). This second firewall filters packets and, if verified, routes them to VMNIC0, which is connected to the internal corporate network. Hence a complete DMZ can be built inside a single ESX Server. Because of the isolation between the various virtual machines, even if one of them were to be compromised by, say, a virus, the other virtual machines would be unaffected.
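On ESX 3.x the virtual switches for this kind of layout can be created from the service console with esxcfg-vswitch. The sketch below is only illustrative: the switch, port group and vmnic names are invented for this example, and the firewall, Web server and application server VMs still have to be attached to the right port groups through the VI client.

Create the external-facing switch and uplink it to the Internet NIC:

esxcfg-vswitch -a vSwitchExternal
esxcfg-vswitch -A "External" vSwitchExternal
esxcfg-vswitch -L vmnic1 vSwitchExternal

Create the DMZ switch with no uplink, so traffic can only pass through the firewall VMs:

esxcfg-vswitch -a vSwitchDMZ
esxcfg-vswitch -A "DMZ" vSwitchDMZ

Create the internal switch and uplink it to the corporate network NIC (VMNIC0 in the text above):

esxcfg-vswitch -a vSwitchInternal
esxcfg-vswitch -A "Internal" vSwitchInternal
esxcfg-vswitch -L vmnic0 vSwitchInternal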

Source1

Source2: tommy walker ppt - Virtualization Reducing Costs, Time and Effort with VMware (2002)

Friday, March 2, 2007

esx: you can't run it on a vm!

Running ESX on a VM - vmware.esx-server | Google Groups:

">On Feb 26, 7:03 pm, "yy" <...@yahoo.com.ph> wrote:
>Has anyone successfully setup/ran an ESX on a Virtual Machine? I need
> to do this as a proof of concept before dealing with real hardware.

ESX server won't run in a VM virtualized by ESX server.

I've tried. There is something ESX looks for in the CPU that is not
virtualized by ESX server."

Thursday, February 22, 2007

vmware: virtual center 2.0 - feature highlights

VMware Consolidated Backup
Consolidated Backup enables offloaded and impact-free backup for virtual machines running on an ESX Server system by allowing traditional file-based backup software to leverage VMware virtual machine snapshot technology and efficient SAN-based data transfer.

Remote CD/Floppy Access
Using either the Virtual Infrastructure Client or Virtual Infrastructure Web Access, a virtual machine can access a CD or floppy device from the client's machine.

VMware HA
VMware HA (High Availability) increases the availability of virtual machines by detecting host failures and automatically restarting virtual machines on other available hosts. HA operates on a set of ESX Server 3.0 hosts that have been grouped into a cluster with HA enabled.

VMware DRS
Distributed Resource Scheduling allows resources from all hosts in a cluster to be treated as a single, aggregated pool. When changes occur in the environment, DRS can tune the resource scheduling on individual hosts as well as use VMotion to rebalance workload across hosts in the cluster. When a virtual machine is powered on, DRS calculates the optimal host on which to start it, given current resource levels and the resource configuration of the new virtual machine.

Resource Pools
A resource pool provides a way of subdividing the resources of a stand-alone host or a cluster into smaller pools. A resource pool is configured with a set of CPU and memory resources that are shared by the virtual machines that run in the resource pool. Resource pools can be nested.

VMotion
- Virtual machine migrations while powered on (VMotion) are also fully operational and enable migrations between two ESX Server 3.0 hosts or between two ESX Server 2.x hosts.
- Virtual machine migrations while powered off (cold migrations) are fully operational and enable migrations between two ESX Server 3.x hosts or between ESX Server 3.x and ESX Server 2.x hosts.

esx: esx 3.0 feature highlights

  • NAS and iSCSI Support
    ESX Server 2.x could store virtual machines only on SCSI disks and on Fibre Channel SANs. ESX Server 3.0 can store virtual machines on NAS and iSCSI. iSCSI LUNs, like Fibre Channel LUNs, can be formatted with the VMware file system (VMFS). Each virtual machine resides in a single directory. Network attached storage (NAS) appliances must present file systems over the NFS protocol for ESX Server to be able to use them. NFS mounts are used like VMFS with ESX Server creating one directory for each virtual machine.

  • Four-way Virtual SMP and 16 GB Memory Available to Guest Operating Systems
    Virtual machines can now have up to 4 processors (up from 2) and 16 GB of RAM (up from 3.6 GB) allocated to them.

  • ESX Server Clusters
    VirtualCenter 2.x introduces the notion of a cluster of ESX Server hosts. A cluster is a collection of hosts that can be managed as a single entity. The resources from all the hosts in a cluster are aggregated into a single pool. A cluster looks like a stand-alone host, but it typically has more resources available.

  • 64-Bit OS Virtual Machines
    64-bit guest operating systems are experimentally supported and visible in the Virtual Infrastructure Client interface, with full support available in future VI3 releases.

  • Hot-Add Virtual Disk
    ESX Server 3.0 supports adding new virtual disks to a virtual machine while it is running. This is useful with guest operating systems capable of recognizing hot-add hardware.

  • VMFS 3
    There is a new generation of VMFS in ESX Server 3.0. Scalability, performance, and reliability have all improved. Furthermore, subdirectories are now supported. The ESX Server system creates a directory for each virtual machine and all its component files.

  • Potential Scalability Bottlenecks Have Been Removed
    In ESX Server 2.x, one vmx process per running virtual machine ran in the service console to implement certain virtual machine functionality. In ESX Server 3.x, these processes are no longer bound to the service console but instead are distributed across a server's physical CPUs.


  • New Guest SDK Available
    The VMware Guest SDK allows software running in the guest operating system in a VMware ESX Server 3.0 virtual machine to collect certain data about the state and performance of the virtual machine. Download the Guest SDK package from www.vmware.com/support/developer/
  • VMware Tools Auto-Upgrade
    VMware Infrastructure 3 (ESX Server 3.0/VirtualCenter 2.0) supports the ability to install or upgrade VMware Tools on multiple virtual machines at the same time without needing to interact with each virtual machine. Detailed instructions are provided in the Installation and Upgrade Guide.

esx: vmfs 2 overview

(source: vmware)



While conventional file systems allow only one server to have read-write access to the same file at a given time, VMFS is a cluster file system that leverages shared storage to allow multiple instances of ESX Server to read and write to the same storage concurrently.

VMFS allows:



  • Easier VM management: Greatly simplify virtual machine provisioning and administration by efficiently storing the entire virtual machine state in a central location.
  • Live migration of VMs: Support unique virtualization-based capabilities such as live migration of running virtual machines from one physical server to another, automatic restart of a failed virtual machine on a separate physical server, and clustering virtual machines across different physical servers.
  • Performance: Get virtual disk performance close to native SCSI for even the most data-intensive applications with dynamic control of virtual storage volumes.
  • Concurrent access to storage: Enable multiple installations of ESX Server to read and write from the same storage location concurrently.
  • Dynamic ESX Server Modification: Add or delete an ESX Server from a VMware VMFS volume without disrupting other ESX Server hosts.
  • VMFS volume resizing on the fly: Create new virtual machines without relying on a storage administrator. Adaptive block sizing and addressing for growing files allows you to increase a VMFS volume on the fly.
  • Automatic LUN mapping: Simplify storage management with automatic discovery and mapping of LUNs to a VMware VMFS volume.
  • I/O parameter tweaking: Optimize your virtual machine I/O with adjustable volume, disk, file and block sizes.
  • Failure recovery: Recover virtual machines faster and more reliably in the event of server failure with distributed journaling.

VMware VMFS cluster file system is included in VMware Infrastructure Enterprise and Standard Editions and is available for local storage only with the Starter edition.


vmware: virtual infrastructure 3 overview

VMware Infrastructure 3 is a feature-rich suite that delivers the production-proven efficiency, availability, and dynamic management needed to create a responsive data center. The suite includes:

  • VMware ESX Server 3.0
  • VMware VirtualCenter Management Server 2.0
  • VMware Virtual SMP
  • VMware VMFS
  • VMware VMotion
  • VMware HA
  • VMware DRS
  • VMware Consolidated Backup

esx: the vmfs file system

VMFS - Wikipedia: "VMFS is VMware's SAN file system. (Other examples of SAN file systems are Global File System, Oracle Cluster File System). VMFS is used solely in the VMware flagship server product, the ESX. It was developed and is used to store virtual machine disk images, including snapshots.


- Multiple servers can read/write the same filesystem simultaneously, while individual virtual machine files are locked
- VMFS volumes can be logically 'grown' (nondestructively increased in size) by spanning multiple VMFS volumes together.

There are three versions of VMFS: VMFS1, VMFS2 and VMFS3.

* VMFS1 was used by ESX Server 1.x, which is no longer sold. It lacked the cluster filesystem properties and was used by only a single server at a time. VMFS1 is a flat filesystem with no directory structure.

* VMFS2 is used by ESX 2.x, 2.5.x and ESX 3.x. While ESX 3.x can read from VMFS2, it will not mount it for writing. VMFS2 is a flat filesystem with no directory structure.

* VMFS3 is used by ESX 3.x. Its most noticeable new feature is a directory structure in the filesystem. Older versions of ESX can't read or write VMFS3 volumes. Beginning with ESX 3 and VMFS3, virtual machine configuration files are also stored in the VMFS partition by default."

vmware: known issues

Source: VMware - Wikipedia

Hardware support issues:

  • VMware virtual machines do not support FireWire.
  • VMware virtual machines provide no direct USB 2.0 support, but make USB 2.0 devices in the host operating system visible to the guest operating system as USB 1.1 devices. (VMware Workstation 6.0 will offer USB 2.0 support.)
  • VMware virtual machines provide only experimental support for 3D hardware acceleration.
  • Moving from ESX 2.x to ESX 3.x, it is no longer possible to connect/disconnect the CD-ROM through the console session.


vmware: known issues with newer linux kernels

Source: VMware - Wikipedia:

"Older versions of VMware seem unable to run newer versions of Linux (kernel 2.4 series seem to panic when run on VMware 2.x; and 2.6 series kernels, when run on VMware 3.x, give a protection error). VMware Workstation has now (as of Nov 2006) reached version 5.5.3, which supports these newer operating systems and kernels. However, the latest versions of the 2.6.x kernel require a patch to use all the VMware features — even when using VMware Workstation 5.0 or 5.5. This patch, freely available as vmware-any-any-updatexxx (as of 29 January 2007: update107), comes via the Czech Technical University."

Wednesday, February 21, 2007

virtualization: types of hypervisors

From the Wikipedia Definition:

"In computing, a hypervisor (also: virtual machine monitor) is a virtualization platform that allows multiple operating systems to run on a host computer at the same time. The term usually refers to an implementation using full virtualization. Hypervisors are currently classified in two types:

* Type 1 hypervisor (e.g. ESX Server) is software that runs directly on a given hardware platform (as an operating system control program). A 'guest' operating system thus runs at the second level above the hardware. The classic type 1 hypervisor was CP/CMS, developed at IBM in the 1960s, ancestor of IBM's current z/VM. More recent examples are Xen, VMware's ESX Server, and Sun's Hypervisor (released in 2005).

* Type 2 hypervisor (e.g. VMware Workstation) is software that runs within an operating system environment. A 'guest' operating system thus runs at the third level above the hardware. Examples include VMware Server and Microsoft Virtual Server."

virtualization: server consolidation

definition from Whatis.com:

DEFINITION - Server consolidation is an approach to the efficient usage of computer server resources in order to reduce the total number of servers or server locations that an organization requires. The practice developed in response to the problem of server sprawl, a situation in which multiple, under-utilized servers take up more space and consume more resources than can be justified by their workload.

According to Tony Iams, Senior Analyst at D.H. Brown Associates Inc. in Port Chester, NY, servers in many companies typically run at 15-20% of their capacity, which may not be a sustainable ratio in the current economic environment. Businesses are increasingly turning to server consolidation as one means of cutting unnecessary costs and maximizing return on investment (ROI) in the data center. Of 518 respondents in a Gartner Group research study, six percent had conducted a server consolidation project, 61% were currently conducting one, and 28% were planning to do so in the immediate future.

Although consolidation can substantially increase the efficient use of server resources, it may also result in complex configurations of data, applications, and servers that can be confusing for the average user to contend with. To alleviate this problem, server virtualization may be used to mask the details of server resources from users while optimizing resource sharing. Another approach to server consolidation is the use of blade servers to maximize the efficient use of space.

virtualization: server consolidation and containment

Server Consolidation and Containment:

Today's IT organizations face the costly management of server sprawl. This includes the hardware, maintenance and people resources needed to manage, operate and administer those servers on a daily basis. VMware server consolidation and containment solutions allow enterprises to enable workload isolation and granular resource control for all of the system's computing and I/O resources.

Using virtual infrastructure to consolidate physical systems in the data center, enterprises experience:

* Lower total cost of ownership of servers
* Higher server utilization
* Increased operational efficiency