Tuesday, February 27, 2007

networking: vlans, vlan ids, vlan trunks


Networking FAQ: VLAN
Wikipedia

VLANs:
are independent logical LANs within the same physical network. They reduce the size of broadcast domains and aid network administration by separating logical segments of a LAN (such as company departments) that should not exchange data over the LAN (they can still exchange data via routing).

VLANs rely on mid-range to high-end switches that allow the available ports to be partitioned in software, based on certain criteria. Such a set of ports is called a Virtual LAN, abbreviated to VLAN. As you can imagine, the switch fabric forwards Ethernet frames only among ports belonging to the same VLAN, preventing any direct communication between distinct Virtual LANs.

VLAN Trunks:
When a single switch is not sufficient and the LAN extends over several of them, the Virtual LANs must be created on each switch and communication between the switches must be enabled. A first solution would be to dedicate one uplink port per VLAN; this, however, wastes ports and cables: if two switches have n Virtual LANs in common, n uplink cables are required.

A better solution is to create a trunk (trunking): each of the two switches assigns one port (the trunk port) that is common to all the VLANs that need to be transported. Such trunks must run between the "tagged ports" of VLAN-aware devices, so they are often switch-to-switch or switch-to-router links rather than links to hosts. The switches tag each packet leaving via the trunk with a VLAN ID, and each packet entering via the trunk is forwarded onto the right VLAN based on that VLAN ID. Obviously, the two switches must use the same trunking protocol to communicate correctly over the trunk. There are several such protocols, often proprietary, which can lead to interoperability problems among different brands of switches that use Virtual LANs. The most widely used trunking protocol, however, is IEEE 802.1Q: for each Ethernet frame exiting a trunk-configured port, it adds 4 bytes, of which only 12 bits are used to identify the VLAN. The VLAN ID is therefore between 1 and 4094, considering that 0 and 4095 are reserved values.
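
To make the 802.1Q tag layout concrete, here is a minimal Python sketch (the helper name and the sample frame bytes are invented for this example, not taken from any real tool) that reads the 4-byte tag and extracts the 12-bit VLAN ID from a tagged Ethernet frame:

    import struct

    TPID_8021Q = 0x8100  # Tag Protocol Identifier that marks an 802.1Q-tagged frame

    def parse_8021q(frame: bytes):
        """Return (vlan_id, priority, inner_ethertype) for an 802.1Q-tagged
        frame, or None if the frame carries no tag."""
        # Destination MAC (6 bytes) + source MAC (6 bytes) come first; the next
        # 2 bytes hold either the EtherType or, for tagged frames, the TPID.
        (tpid,) = struct.unpack_from("!H", frame, 12)
        if tpid != TPID_8021Q:
            return None
        # 2-byte Tag Control Information: 3 bits priority, 1 bit DEI,
        # 12 bits VLAN ID (usable values 1-4094; 0 and 4095 are reserved).
        tci, ethertype = struct.unpack_from("!HH", frame, 14)
        return tci & 0x0FFF, tci >> 13, ethertype

    # Example: a tagged header carrying VLAN ID 100 with priority 5
    header = bytes(12) + struct.pack("!HHH", TPID_8021Q, (5 << 13) | 100, 0x0800)
    print(parse_8021q(header))  # (100, 5, 2048)

The same bit layout explains the 1-4094 range quoted above: 12 bits give 4096 values, minus the two reserved ones.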

Thursday, February 22, 2007

vmware: virtual center 2.0 - feature highlights

VMware Consolidated Backup
Consolidated Backup enables offloaded and impact-free backup for virtual machines running on an ESX Server system by allowing traditional file-based backup software to leverage VMware virtual machine snapshot technology and efficient SAN-based data transfer.

Remote CD/Floppy Access
Using either the Virtual Infrastructure Client or Virtual Infrastructure Web Access, a virtual machine can access a CD or floppy device from the client's machine.

VMware HA
VMware HA (High Availability) increases the availability of virtual machines by detecting host failures and automatically restarting virtual machines on other available hosts. HA operates on a set of ESX Server 3.0 hosts that have been grouped into a cluster with HA enabled.

VMware DRS
Distributed Resource Scheduling allows resources from all hosts in a cluster to be treated as a single, aggregated pool. When changes occur in the environment, DRS can tune the resource scheduling on individual hosts as well as use VMotion to rebalance workload across hosts in the cluster. When a virtual machine is powered on, DRS calculates the optimal host on which to start it, given current resource levels and the resource configuration of the new virtual machine.

Resource Pools
A resource pool provides a way of subdividing the resources of a stand-alone host or a cluster into smaller pools. A resource pool is configured with a set of CPU and memory resources that are shared by the virtual machines that run in the resource pool. Resource pools can be nested.

VMotion
- Virtual machine migrations while powered on (VMotion) are fully operational and enable migrations between two ESX Server 3.0 hosts or between two ESX Server 2.x hosts.
- Virtual machine migrations while powered off (cold migrations) are fully operational and enable migrations between two ESX Server 3.x hosts or between ESX Server 3.x and ESX Server 2.x hosts.

esx: esx 3.0 feature highlights

  • NAS and iSCSI Support
    ESX Server 2.x could store virtual machines only on SCSI disks and on Fibre Channel SANs. ESX Server 3.0 can store virtual machines on NAS and iSCSI. iSCSI LUNs, like Fibre Channel LUNs, can be formatted with the VMware file system (VMFS). Each virtual machine resides in a single directory. Network attached storage (NAS) appliances must present file systems over the NFS protocol for ESX Server to be able to use them. NFS mounts are used like VMFS with ESX Server creating one directory for each virtual machine.

  • Four-way Virtual SMP and 16 GB Memory Available to Guest Operating Systems
    Virtual machines can now have up to 4 processors (up from 2) and 16 GB of RAM (up from 3.6 GB) allocated to them.

  • ESX Server Clusters
    VirtualCenter 2.x introduces the notion of a cluster of ESX Server hosts. A cluster is a collection of hosts that can be managed as a single entity. The resources from all the hosts in a cluster are aggregated into a single pool. A cluster looks like a stand-alone host, but it typically has more resources available.

  • 64-Bit OS Virtual Machines
    64-bit guest operating systems are experimentally supported and visible in the Virtual Infrastructure Client interface, with full support available in future VI3 releases.

  • Hot-Add Virtual Disk
    ESX Server 3.0 supports adding new virtual disks to a virtual machine while it is running. This is useful with guest operating systems capable of recognizing hot-add hardware.

  • VMFS 3
    There is a new generation of VMFS in ESX Server 3.0. Scalability, performance, and reliability have all improved. Furthermore, subdirectories are now supported. The ESX Server system creates a directory for each virtual machine and all of its component files.

  • Potential Scalability Bottlenecks Have Been Removed
    In ESX Server 2.x, one vmx process per running virtual machine ran in the service console to implement certain virtual machine functionality. In ESX Server 3.x, these processes are no longer bound to the service console but instead are distributed across a server's physical CPUs.


  • New Guest SDK Available
    The VMware Guest SDK allows software running in the guest operating system in a VMware ESX Server 3.0 virtual machine to collect certain data about the state and performance of the virtual machine. Download the Guest SDK package from www.vmware.com/support/developer/
  • VMware Tools Auto-Upgrade
    VMware Infrastructure 3 (ESX Server 3.0/VirtualCenter 2.0) supports the ability to install or upgrade VMware Tools on multiple virtual machines at the same time without needing to interact with each virtual machine. Detailed instructions are provided in the Installation and Upgrade Guide.

esx: vmfs 2 overview

(source: vmware)



While conventional file systems allow only one server to have read-write access to the same file at a given time, VMFS is a cluster file system that leverages shared storage to allow multiple instances of ESX Server to read and write to the same storage concurrently. VMFS allows:



  • Easier VM management: Greatly simplify virtual machine provisioning and administration by efficiently storing the entire virtual machine state in a central location.
  • Live Migration of VMS: Support unique virtualization-based capabilities such as live migration of running virtual machines from one physical server to another, automatic restart of a failed virtual machine on a separate physical server, and clustering virtual machines across different physical servers.
  • Performance: Get virtual disk performance close to native SCSI for even the most data-intensive applications with dynamic control of virtual storage volumes.
  • Concurrent access to storage: Enable multiple installations of ESX Server to read and write from the same storage location concurrently.
  • Dynamic ESX Server Modification: Add or delete an ESX Server from a VMware VMFS volume without disrupting other ESX Server hosts.
  • VMFS volume resizing on the fly: Create new virtual machines without relying on a storage administrator. Adaptive block sizing and addressing for growing files allows you to increase a VMFS volume on the fly.
  • Automatic LUN mapping: Simplify storage management with automatic discovery and mapping of LUNs to a VMware VMFS volume.
  • I/O parameter tweaking: Optimize your virtual machine I/O with adjustable volume, disk, file and block sizes.
  • Failure recovery: Recover virtual machines faster and more reliably in the event of server failure with distributed journaling.

VMware VMFS cluster file system is included in VMware Infrastructure Enterprise and Standard Editions and is available for local storage only with the Starter edition.


vmware: virtual infrastructure 3 overview

VMware Infrastructure 3 is a feature-rich suite that delivers the production-proven efficiency, availability, and dynamic management needed to create a responsive data center. The suite includes ESX Server 3.0, VirtualCenter 2.0, Virtual SMP, VMFS, VMotion, VMware HA, VMware DRS, and VMware Consolidated Backup.

esx: the vmfs file system

VMFS - Wikipedia: "VMFS is VMware's SAN file system. (Other examples of SAN file systems are Global File System and Oracle Cluster File System.) VMFS is used solely in VMware's flagship server product, ESX Server. It was developed and is used to store virtual machine disk images, including snapshots.


- Multiple servers can read/write the same filesystem simultaneously, while individual virtual machine files are locked
- VMFS volumes can be logically 'grown' (nondestructively increased in size) by spanning multiple VMFS volumes together.

There are three versions of VMFS: VMFS1, VMFS2 and VMFS3.

* VMFS1 was used by ESX Server 1.x, which is no longer sold. It did not have the cluster filesystem properties and was used by only a single server at a time. VMFS1 is a flat filesystem with no directory structure.

* VMFS2 is used by ESX 2.x, 2.5.x and ESX 3.x. While ESX 3.x can read from VMFS2, it will not mount it for writing. VMFS2 is a flat filesystem with no directory structure.

* VMFS3 is used by ESX 3.x. Its most noticeable new feature is a directory structure in the filesystem. Older versions of ESX cannot read or write VMFS3 volumes. Beginning with ESX 3 and VMFS3, virtual machine configuration files are also stored in the VMFS partition by default."

vmware: known issues

Source: VMware - Wikipedia

Hardware support issues:

  • VMware virtual machines do not support FireWire.
  • VMware virtual machines provide no direct USB 2.0 support, but they make USB 2.0 devices attached to the host operating system visible to the guest operating system as USB 1.1 devices. (VMware Workstation 6.0 will offer USB 2.0 support.)
  • VMware virtual machines provide only experimental support for 3D hardware acceleration.
  • As of ESX 3.x (unlike ESX 2.x), it is no longer possible to connect/disconnect the CD-ROM through the console session.


vmware: known issues with newer linux kernels

Source: VMware - Wikipedia:

"Older versions of VMware seem unable to run newer versions of Linux (kernel 2.4 series seem to panic when run on VMware 2.x; and 2.6 series kernels, when run on VMware 3.x, give a protection error). VMware Workstation has now (as of Nov 2006) reached version 5.5.3, which supports these newer operating systems and kernels. However, the latest versions of the 2.6.x kernel require a patch to use all the VMware features — even when using VMware Workstation 5.0 or 5.5. This patch, freely available as vmware-any-any-updatexxx (as of 29 January 2007: update107), comes via the Czech Technical University."

LUN: Logical Unit Number

In computer storage, a logical unit number or LUN is an address for an individual disk drive and, by extension, the disk device itself. The term originated in the SCSI protocol as a way to differentiate individual disk drives within a common SCSI target device like a disk array. The term has become common in storage area networks (SAN) and other enterprise storage fields. Today, LUNs are normally not entire disk drives but rather virtual partitions (or volumes) of a RAID set.

Nomenclature: In SCSI, LUNs are addressed in conjunction with the controller ID of the host bus adapter, the target ID of the storage array, and an optional (and no longer common) slice ID. In the UNIX family of operating systems, these IDs are often combined into a single "word". For example, "c1t2d3s4" would refer to controller 1, target 2, disk 3, slice 4.
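
As a small illustration of that naming convention, here is a Python sketch (the regular expression and function name are made up for this example) that splits a "cXtYdZsW" device name into its controller, target, disk, and optional slice IDs:

    import re

    # Solaris-style device names: controller, target, disk, optional slice.
    DEVICE_RE = re.compile(
        r"^c(?P<controller>\d+)t(?P<target>\d+)d(?P<disk>\d+)(?:s(?P<slice>\d+))?$"
    )

    def parse_device_name(name: str) -> dict:
        match = DEVICE_RE.match(name)
        if match is None:
            raise ValueError("not a cXtYdZ[sW] device name: " + name)
        # Drop the slice entry when it is absent and convert the IDs to integers.
        return {k: int(v) for k, v in match.groupdict().items() if v is not None}

    print(parse_device_name("c1t2d3s4"))
    # {'controller': 1, 'target': 2, 'disk': 3, 'slice': 4}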


A LUN (Logical Unit Number) is an identification scheme for storage disks that typically supports a small number of units addressed as LUN 0 through 7, 15 or 31, depending on the technology. For example, Fibre Channel supports 32 addresses (0-31). A LUN may refer to a single disk, a subset of a single disk or an array of disks. Derived from the SCSI bus technology, each SCSI ID address can be further subdivided into LUNs 0 through 15 for disk arrays and libraries. See SCSI.


Logical Unit Number Masking or LUN masking is an authorization process that makes a LUN available to some hosts and unavailable to other hosts. The security benefits are limited, in that with many HBAs (e.g. SCSI cards) it is possible to forge source addresses (WWNs/MACs/IPs). However, it is mainly implemented not as a security measure per se, but rather as protection against misbehaving servers corrupting disks belonging to other servers. For example, Windows servers attached to a SAN will under some conditions corrupt non-Windows (Unix, Linux, NetWare) volumes on the SAN by attempting to write Windows volume labels to them. By hiding the other LUNs from the Windows server, this can be prevented, since the Windows server does not even realise the other LUNs exist.

os: threads, fibers and processes



Thread (computer science) - Wikipedia, the free encyclopedia

Threads compared with processes

Threads are distinguished from traditional multi-tasking operating system processes in that processes are typically independent, carry considerable state information, have separate address spaces, and interact only through system-provided inter-process communication mechanisms. Multiple threads, on the other hand, typically share the state information of a single process, and share memory and other resources directly. Context switching between threads in the same process is typically faster than context switching between processes. Systems like Windows NT and OS/2 are said to have "cheap" threads and "expensive" processes; in other operating systems there is not so great a difference.



Multithreading is a popular programming and execution model that allows multiple threads to exist within the context of a single process, sharing the process' resources but able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. However, perhaps the most interesting application of the technology is when it is applied to a single process to enable parallel execution on a multiprocessor system.

This advantage of a multi-threaded program allows it to operate faster on computer systems that have multiple CPUs, CPUs with multiple cores, or across a cluster of machines. This is because the threads of the program naturally lend themselves to truly concurrent execution.



A process is the "heaviest" unit of kernel scheduling. Processes own resources allocated by the operating system. Resources include memory, file handles, sockets, device handles, and windows. Processes do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way. Processes are typically pre-emptively multitasked. However, Windows 3.1 and older versions of Mac OS used co-operative or non-preemptive multitasking.


A thread is the "lightest" unit of kernel scheduling. At least one thread exists within each process. If multiple threads can exist within a process, then they share the same memory and file resources. Threads are pre-emptively multitasked if the operating system's process scheduler is pre-emptive. Threads do not own resources except for a stack and a copy of the registers, including the program counter.

A fiber is a "user thread." In some situations, there is a distinction between "kernel threads" and "user threads" -- the former are managed and scheduled by the kernel, whereas the latter are managed and scheduled in userspace. In this article, the term "thread" is used to refer to kernel threads, whereas "fiber" is used to refer to user threads. Fibers are co-operatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run. A fiber can be scheduled to run in any thread in the same process.
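
To make the shared-memory distinction above concrete, here is a minimal Python sketch (the counter variable and bump() function exist only for this example): threads created inside one process update the same module-level variable, while separate processes each work on their own copy of it.

    import threading
    import multiprocessing

    counter = 0  # module-level state, shared by every thread in this process

    def bump():
        global counter
        for _ in range(100_000):
            counter += 1

    if __name__ == "__main__":
        # Threads share the process's address space: both update the same counter
        # (without a lock the final value may fall short of 200000, but it changes).
        threads = [threading.Thread(target=bump) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print("after threads:", counter)

        # Processes have separate address spaces: the children increment their own
        # copies of counter, so the parent's value stays at 0.
        counter = 0
        procs = [multiprocessing.Process(target=bump) for _ in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("after processes:", counter)

Fibers have no direct standard-library equivalent here, but cooperatively scheduled constructs such as generators, which must explicitly yield before anything else runs, follow the same "user thread" idea described above.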





Wednesday, February 21, 2007

linux basics: /etc/sysconfig/network

The /etc/sysconfig/network file is used to specify information about the desired network configuration; a small parsing example is sketched after the list below. The following values may be used:
  • NETWORKING=<answer>, where <answer> is one of the following boolean values:
    • yes — Networking should be configured.
    • no — Networking should not be configured.
  • HOSTNAME=<name>, where <name> should be the Fully Qualified Domain Name (FQDN), such as hostname.domain.com, but can be whatever hostname you want.
    Note
    For compatibility with older software that people might install (such as trn), the /etc/HOSTNAME file should contain the same value as here.
  • GATEWAY=<gw-ip>, where <gw-ip> is the IP address of the network's gateway.
  • GATEWAYDEV=<gw-dev>, where <gw-dev> is the gateway device, such as eth0.
  • NISDOMAIN=<domain-name>, where <domain-name> is the NIS domain name.
  • FORWARD_IPV4=<answer>, where <answer> is yes or no, deciding whether or not to perform IP forwarding.
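
As mentioned above, here is a small, hypothetical Python sketch that parses such KEY=value directives into a dictionary; the sample file contents are illustrative only and not taken from a real system.

    # Minimal sketch: read /etc/sysconfig/network-style KEY=value directives.
    SAMPLE = """
    NETWORKING=yes
    HOSTNAME=hostname.domain.com
    GATEWAY=192.168.1.1
    GATEWAYDEV=eth0
    FORWARD_IPV4=no
    """

    def parse_sysconfig(text: str) -> dict:
        settings = {}
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')  # values may be quoted
        return settings

    config = parse_sysconfig(SAMPLE)
    print(config["HOSTNAME"], config["FORWARD_IPV4"])  # hostname.domain.com no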

virtualization: types of hypervisors

From the Wikipedia Definition:

"In computing, a hypervisor (also: virtual machine monitor) is a virtualization platform that allows multiple operating systems to run on a host computer at the same time. The term usually refers to an implementation using full virtualization. Hypervisors are currently classified in two types:

* Type 1 hypervisor (e.g. ESX Server) is software that runs directly on a given hardware platform (as an operating system control program). A 'guest' operating system thus runs at the second level above the hardware. The classic type 1 hypervisor was CP/CMS, developed at IBM in the 1960s, ancestor of IBM's current z/VM. More recent examples are Xen, VMware's ESX Server, and Sun's Hypervisor (released in 2005).

* Type 2 hypervisor (e.g. VMware Workstation) is software that runs within an operating system environment. A 'guest' operating system thus runs at the third level above the hardware. Examples include VMware Server and Microsoft Virtual Server."

virtualization: server consolidation

definition from Whatis.com:

DEFINITION - Server consolidation is an approach to the efficient usage of computer server resources in order to reduce the total number of servers or server locations that an organization requires. The practice developed in response to the problem of server sprawl, a situation in which multiple, under-utilized servers take up more space and consume more resources than can be justified by their workload.

According to Tony Iams, Senior Analyst at D.H. Brown Associates Inc. in Port Chester, NY, servers in many companies typically run at 15-20% of their capacity, which may not be a sustainable ratio in the current economic environment. Businesses are increasingly turning to server consolidation as one means of cutting unnecessary costs and maximizing return on investment (ROI) in the data center. Of 518 respondents in a Gartner Group research study, six percent had conducted a server consolidation project, 61% were currently conducting one, and 28% were planning to do so in the immediate future.

Although consolidation can substantially increase the efficient use of server resources, it may also result in complex configurations of data, applications, and servers that can be confusing for the average user to contend with. To alleviate this problem, server virtualization may be used to mask the details of server resources from users while optimizing resource sharing. Another approach to server consolidation is the use of blade servers to maximize the efficient use of space.

os: processor affinity

Processor Affinity - Wikipedia:

"Processor affinity is a modification of the native central queue scheduling algorithm. Each task (be it process or thread) in the queue has a tag indicating its preferred / kin processor. At allocation time, each task is allocated to its kin processor in preference to others.

Processor affinity takes advantage of the fact that some remnants of a process may remain in one processor's state (in particular, in its cache) from the last time the process ran, and so scheduling it to run on the same processor the next time could result in the process running more efficiently than if it were to run on another processor."
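
As a concrete example, Python's standard library exposes the Linux affinity calls directly: the sketch below uses os.sched_getaffinity and os.sched_setaffinity (Linux-only, Python 3.3+) to pin the calling process to two CPUs so the scheduler keeps it on processors whose caches may still hold its data. The CPU numbers are chosen arbitrarily for the example.

    import os

    pid = 0  # 0 means "the calling process"
    print("allowed CPUs before:", os.sched_getaffinity(pid))

    # Restrict scheduling of this process to CPUs 0 and 1.
    os.sched_setaffinity(pid, {0, 1})
    print("allowed CPUs after:", os.sched_getaffinity(pid))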

virtualization: server consolidation and containment

Server Consolidation and Containment:

Today's IT organizations face the costly management of server sprawl. This includes the hardware, maintenance and people resources needed to manage, operate and administer those servers on a daily basis. VMware server consolidation and containment solutions let enterprises isolate workloads and exert granular resource control over all of the system's computing and I/O resources.

Using virtual infrastructure to consolidate physical systems in the data center, enterprises experience:

* Lower total cost of ownership of servers
* Higher server utilization
* Increased operational efficiency

linux: grub vs lilo

GNU GRUB 2:

Here are some of GRUB 2's features that make it more attractive than LILO (imo):

* Scripting support, such as conditionals, loops, variables and functions.
* Graphical interface.
* Dynamic loading of modules in order to extend itself at run time rather than at build time.
* Portability for various architectures.
* Internationalization. This includes support for non-ASCII character code, message catalogs like gettext, fonts, graphics console, and so on.
* Real memory management, to make GNU GRUB more extensible.
* Modular, hierarchical, object-oriented framework for file systems, files, devices, drives, terminals, commands, partition tables and OS loaders.
* Cross-platform installation which allows for installing GRUB from a different architecture.
* Rescue mode that recovers otherwise unbootable systems. Stage 1.5 was eliminated.
* Fixes design mistakes in GRUB Legacy that could not be solved while keeping backward compatibility, such as the way partitions are numbered.



GRUB (Legacy) vs. LILO

As stated at the start of this article, all boot loaders work in a similar way to fulfill a common purpose. But LILO and GRUB do have a number of differences:

  • LILO has no interactive command interface, whereas GRUB does.
  • LILO does not support booting from a network, whereas GRUB does.
  • LILO stores information regarding the location of the operating systems it can load physically on the MBR. If you change your LILO configuration file, you have to rewrite the LILO stage-one boot loader to the MBR. Compared with GRUB, this is a much riskier option, since a misconfigured MBR could leave the system unbootable. With GRUB, if the configuration file is configured incorrectly, it will simply default to the GRUB command-line interface.