
Using the partedUtil command line utility on ESXi and ESX (1036609) | VMware KB


Reminder to self when checking “new” disks to see what partitions they contain before formatting them as VMFS.

There is a truckload of information at [WayBack] Using the partedUtil command line utility on ESXi and ESX (1036609) | VMware KB.

A few tips; example output is further below:

  • Disks are listed under /vmfs/devices/disks/, where there are two entries per device: a path leading to the device, and a link to that path starting with vml., which I filter out with grep.
  • If an entry under /vmfs/devices/disks/ ends with :# where # is a number, then it is a partition.
  • Just skip partedUtil get, as partedUtil getptbl will give you exactly the same information,
    • plus an extra initial line indicating what kind of partition table it is. KB 1036609 has a longer list, but these are the ones you usually see:
      • unknown: the disk has no partition table yet (usually), or the type of partition table cannot be determined (rarely)
      • gpt: there is a GUID Partition Table
      • msdos: there is a Master Boot Record partition table
    • plus, on ESXi 6.x, two extra columns listing the partition GUID and the partition type description
  • The output of partedUtil is unformatted, which means it is easy to parse but hard to read for humans. You can pipe it through sed 's/ /\t/g' (as there is no tr in the ESXi busybox).
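
Putting these tips together, here is a quick sketch (assuming the busybox sh on ESXi) that prints the partition table type of every disk device, skipping both the vml. links and the :# partition entries:

for DISK in $(ls /vmfs/devices/disks/ | grep -v '^vml\.' | grep -v ':[0-9][0-9]*$') ; do
  echo "=== $DISK ==="                              # device name
  partedUtil getptbl "/vmfs/devices/disks/$DISK"    # label type plus partition list
done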

Some more background reading

On scripting:

  • The shell is sh (always been there)
  • There is Python (ESXi 5.1 has Python 2.7.8; ESXi 6.5 has Python 3.5.3; it has likely been available in earlier versions too).

On device names:

On errors:

~ # find /vmfs/devices/disks/ | grep T1500LM0032D9YH148
/vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148__________________________________Z110C4Q0
~ # partedUtil getptbl /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148__________________________________Z110C4Q0
unknown
182401 255 63 2930277168
~ #

I know of these VMFS types (only three of which were ever released):

  • VMFS-3: supported on ESXi 3.x, 4.x, 5.x and 6.x; deprecated as of 6.0 (cannot be created as of 6.0) and has quite a few limitations.
  • VMFS-4: was never released.
  • VMFS-5: can be converted from VMFS-3 (see the example right after this list for checking which version a datastore uses).
  • VMFS-6: cannot be converted from other VMFS types.
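
To see which VMFS version a mounted datastore actually uses, vmkfstools can report it; a minimal example (datastore1 is a placeholder for one of your datastore labels):

# query file system attributes, including the VMFS version, of a mounted datastore
vmkfstools -Ph /vmfs/volumes/datastore1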

Some interesting links about the various VMFS types:

Busybox has been updated over time:

 

Examples and output

Example output from one of my systems; I stripped most of the disks as they’re not really relevant here.

[root@ESXi-X10SRH-CF:~] ls -1 /vmfs/devices/disks/ | grep -v '^vml\.'
naa.5000c50087762d1b
...
naa.600605b00aa054a0ff000021022683ae
naa.600605b00aa054a0ff000021022683ae:1
...
t10.ATA_____ST1500LM0032D9YH148__________________________________Z110C4Q0
t10.ATA_____Samsung_SSD_850_PRO_2TB_________________S2KMNCAGB04321L_____
t10.ATA_____Samsung_SSD_850_PRO_2TB_________________S2KMNCAGB04321L_____:1
...
t10.SanDisk00Ultra_Fit000000000000004C530001240406103372
t10.SanDisk00Ultra_Fit000000000000004C530001240406103372:1
t10.SanDisk00Ultra_Fit000000000000004C530001240406103372:5
t10.SanDisk00Ultra_Fit000000000000004C530001240406103372:6
t10.SanDisk00Ultra_Fit000000000000004C530001240406103372:7
t10.SanDisk00Ultra_Fit000000000000004C530001240406103372:8
t10.SanDisk00Ultra_Fit000000000000004C530001240406103372:9
[root@ESXi-X10SRH-CF:~] partedUtil getptbl /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148__________________________________Z110C4Q0 
unknown
182401 255 63 2930277168
[root@ESXi-X10SRH-CF:~] partedUtil get /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148__________________________________Z110C4Q0 
182401 255 63 2930277168
[root@ESXi-X10SRH-CF:~] partedUtil get /vmfs/devices/disks/t10.ATA_____SAMSUNG_MZHPV512HDGL2D00000______________S1X1NYAGB09589______
62260 255 63 1000215216
1 2048 1000214527 0 0
[root@ESXi-X10SRH-CF:~] partedUtil getptbl /vmfs/devices/disks/t10.ATA_____SAMSUNG_MZHPV512HDGL2D00000______________S1X1NYAGB09589______
gpt
62260 255 63 1000215216
1 2048 1000214527 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
[root@ESXi-X10SRH-CF:~] partedUtil getptbl /vmfs/devices/disks/t10.SanDisk00Ultra_Fit000000000000004C530001240406103372
gpt
3738 255 63 60062500
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
[root@ESXi-X10SRH-CF:~] partedUtil get /vmfs/devices/disks/t10.SanDisk00Ultra_Fit000000000000004C530001240406103372
3738 255 63 60062500
1 64 8191 0 128
5 8224 520191 0 0
6 520224 1032191 0 0
7 1032224 1257471 0 0
8 1257504 1843199 0 0
9 1843200 7086079 0 0
[root@ESXi-X10SRH-CF:~] partedUtil getptbl /vmfs/devices/disks/t10.SanDisk00Ultra_Fit000000000000004C530001240406103372 | sed 's/ /\t/g'
gpt
3738    255 63  60062500
1   64  8191    C12A7328F81F11D2BA4B00A0C93EC93B    systemPartition 128
5   8224    520191  EBD0A0A2B9E5443387C068B6B72699C7    linuxNative 0
6   520224  1032191 EBD0A0A2B9E5443387C068B6B72699C7    linuxNative 0
7   1032224 1257471 9D27538040AD11DBBF97000C2911D1B8    vmkDiagnostic   0
8   1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7    linuxNative 0
9   1843200 7086079 9D27538040AD11DBBF97000C2911D1B8    vmkDiagnostic   0
[root@ESXi-X10SRH-CF:~] partedUtil get /vmfs/devices/disks/t10.SanDisk00Ultra_Fit000000000000004C530001240406103372 | sed 's/ /\t/g'
3738    255 63  60062500
1   64  8191    0   128
5   8224    520191  0   0
6   520224  1032191 0   0
7   1032224 1257471 0   0
8   1257504 1843199 0   0
9   1843200 7086079 0   0
[root@ESXi-X10SRH-CF:~] partedUtil --help
Usage: 
 Get Partitions : get <diskName>
 Set Partitions : set <diskName> ["partNum startSector endSector type attr"]*
 Delete Partition : delete <diskName> <partNum>
 Resize Partition : resize <diskName> <partNum> <start> <end>
 Get Partitions : getptbl <diskName>
 Set Partitions : setptbl <diskName> <label> ["partNum startSector endSector type/guid attr"]*
 Fix Partition Table : fix <diskName>
 Create New Label (all existing data will be lost): mklabel <diskName> <label>
 Show commonly used partition type guids : showGuids
 Get usable first and last sectors : getUsableSectors <diskName>
 Fix GPT Table interactively : fixGpt <diskName>

[root@ESXi-X10SRH-CF:~] partedUtil showGuids
 Partition Type       GUID
 vmfs                 AA31E02A400F11DB9590000C2911D1B8
 vmkDiagnostic        9D27538040AD11DBBF97000C2911D1B8
 vsan                 381CFCCC728811E092EE000C2911D0B2
 virsto               77719A0CA4A011E3A47E000C29745A24
 VMware Reserved      9198EFFC31C011DB8F78000C2911D1B8
 Basic Data           EBD0A0A2B9E5443387C068B6B72699C7
 Linux Swap           0657FD6DA4AB43C484E50933C84B4F4F
 Linux Lvm            E6D6D379F50744C2A23C238F2A3DF928
 Linux Raid           A19D880F05FC4D3BA006743F0F84911E
 Efi System           C12A7328F81F11D2BA4B00A0C93EC93B
 Microsoft Reserved   E3C9E3160B5C4DB8817DF92DF00215AE
 Unused Entry         00000000000000000000000000000000
[root@ESXi-X10SRH-CF:~] cat /local/bin/what-is-my-shell.sh 
if test -n "$ZSH_VERSION"; then
  PROFILE_SHELL=zsh
elif test -n "$BASH_VERSION"; then
  PROFILE_SHELL=bash
elif test -n "$KSH_VERSION"; then
  PROFILE_SHELL=ksh
elif test -n "$FCEDIT"; then
  PROFILE_SHELL=ksh
elif test -n "$PS3"; then
  PROFILE_SHELL=unknown
else
  PROFILE_SHELL=sh
fi
echo $PROFILE_SHELL
echo $SHELL
[root@ESXi-X10SRH-CF:~] /local/bin/what-is-my-shell.sh 
sh
/bin/sh
[root@ESXi-X10SRH-CF:~] python --version
Python 3.5.3
[root@ESXi-X10SRH-CF:~] 

–jeroen


ESXi 6.5: change the automatic startup/shutdown of guest VMs


One more article about differences between the old C# Windows vSphere Client and “new” vSphere HTML5 Web Client in ESXi 6.5 and up.

This time about changing the startup/shutdown sequence.

In the old C# Windows vSphere Client, this was at the host level in the configuration tab under Virtual Machine Startup/Shutdown. There you click on Properties, then adjust the order by moving them up and down (screenshots and more detailed instructions are at ESX(i) AutoStart virtual machines: how to change the VM startup/shutdown settings (via: VMware Communities)).

In the vSphere HTML5 Web Client, there are two bits for this:

On the server you need to enable AutoStart (before/after screenshots omitted):

For each VM you have to enable AutoStart and then determine the order:

  1. In the left, select the VM
  2. In the right, choose Actions, then Autostart, then Enable:
  3. Enable the columns in the VM overview:
  4. Order 1 means highest; adjust accordingly for each VM:

If after boot you get a “Failed – The operation is not allowed in the current state.”, then your machine is still in maintenance mode.
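
If you prefer the console, the same settings can also be changed with vim-cmd; a sketch (the VM id comes from vim-cmd vmsvc/getallvms, the delays are just illustrative values):

# enable autostart on the host
vim-cmd hostsvc/autostartmanager/enable_autostart true
# find the numeric id of the VM
vim-cmd vmsvc/getallvms
# give VM id 1 start order 1: power on with a 120s delay, guest shutdown on stop
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 "powerOn" "120" "1" "guestShutdown" "120" "systemDefault"
# review the resulting startup sequence
vim-cmd hostsvc/autostartmanager/get_autostartseq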

–jeroen

Related: ESXi 6.5: change the host name in the “new” vSphere HTML5 Web Client, or using DHCP option 12 « The Wiert Corner – irregular stream of stuff

VMware ESXi 6.5.0 Patch History


Interesting way to keep your ESXi rig up to date:

[WayBack] Keep track of VMware ESXi patches, subscribe by RSS, Twitter and E-Mail! – Brought to you by @VFrontDe: VMware ESXi 6.5.0 Patch History.

There is an RSS feed: http://feeds.feedburner.com/Esxi650PatchTracker and in-depth information at [WayBack] VMware ESXi Patch Tracker – Help.

Clicking on the link next to Imageprofile will pop up a screen with instructions on how to upgrade your ESXi box to that level, for instance:

# Cut and paste these commands into an ESXi shell to update your host with this Imageprofile
# See the Help page for more instructions
#
esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.5.0-20170702001-standard \
-d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient
#
# Reboot to complete the upgrade
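
Before and after running the update it can be handy to check which image profile the host is on, for example with:

# show the image profile the host was installed or last updated with
esxcli software profile get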

I wish there was a similar thing for the [WayBack] ESXi Embedded Host Client, as I could not find any of the esxui VIB files mentioned there via the search esxui site:https://esxi-patches.v-front.de.

By the time you read this I really hope these two esxui things have been fixed:

–jeroen

ESXi 6.5.0a hang after “balloonVMCI loaded successfully”


No Google results for “balloonVMCI loaded successfully”.

I had this when booting from a USB installation of ESXi 6.5.0a.

It got resolved with ESXi 6.5.0 Update 1. Apparently the first version has issues booting on a SuperMicro-X10SRH-CF from a USB stick.

It’s a bit tricky to get the accompanying VMware-VMvisor-Installer-6.5.0.update01-5969303.x86_64.iso as the My VMware site is a bit broken (even if you have the license, it says you are not entitled), but luckily the ESXi 6.5 update 1 download page [Cache/Archive.is] has the hashes:

MD5SUM: 6d71ca1a8c12d73ca75952f411d16dc7
SHA1SUM: 5a38ae10162e0a1395b12ea31cba6342796f6383
SHA256SUM: f6e5000dff423c275b3ffbdfe08145f369d04b8c4ade5a413f2ef2a029a5e3ef
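
A quick way to verify the downloaded ISO against these hashes (a sketch; shasum/md5 are what macOS ships, sha256sum/md5sum are the Linux equivalents):

# on macOS
shasum -a 256 VMware-VMvisor-Installer-6.5.0.update01-5969303.x86_64.iso
md5 VMware-VMvisor-Installer-6.5.0.update01-5969303.x86_64.iso
# on Linux
sha256sum VMware-VMvisor-Installer-6.5.0.update01-5969303.x86_64.iso
md5sum VMware-VMvisor-Installer-6.5.0.update01-5969303.x86_64.iso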

You also need a good USB stick. If it’s not good enough, you get errors like “Host Local Swap Location has not been enabled” during boot**.

–jeroen

** full log, for instance, at [WayBack] 2017-02-03T03:00:01Z crond[66604]: crond: USER root pid 87677 cmd /usr/lib/vmwar – Pastebin.com

How to passthrough SATA drives directly on VMWare ESXI 6.5 as RDMs

ssh from Mac OS X to ESXi: “WARNING: terminal is not fully functional”


When connecting from my Mac to my ESXi rig, some commands (especially less) show this output:

WARNING: terminal is not fully functional

So I created this alias to connect from my Mac to the internal address of my ESXi rig:

alias ssh-esxi-X10SRH-CF-internal='TERM=xterm ssh -p 22 root@192.168.71.91'

The trick is the TERM=xterm prefix (which you can also replace by export TERM=xterm; if you want future ssh sessions started from that shell to use the same [WayBack] TERM setting).

The reason is that the Mac sets the TERM variable to xterm-256color, which is defined on the Mac itself, but ESXi has a hard time coping with it.
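
You can see what actually arrives at the ESXi side by comparing the local and remote values; a small check (the -t forces a pseudo-terminal, so the client’s TERM is the one that gets passed along):

# on the Mac: typically prints xterm-256color
echo $TERM
# what the shell on the ESXi host sees for this connection
ssh -t -p 22 root@192.168.71.91 'echo $TERM'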

Some Mac OS and Xcode combinations had a problem with xterm-256color not being present ([WayBack] macos – Terminal strangeness after installing Xcode on Lion – Super User), but this isn’t the case on my system:

$ ls -alh `find /usr/share/terminfo | grep 'xterm-256color'`
-rw-r--r-- 1 root wheel 3.2K Jul 30 2016 /usr/share/terminfo/78/xterm-256color

On the Mac you really want to use xterm-256color as it looks way better than xterm-color or xterm: [WayBack] linux – What is the difference between xterm-color & xterm-256color? – Stack Overflow (thanks [WayBack] Chris Page!)

It seems I already did something similar on ESXi itself to get esxtop working: ESXi: when esxtop shows garbage. That was on the ESXi side, and it works for this problem too.

However, it is a bit harder to have a script run during ESXi boot time that sets this, so it is easier to fix this on the Mac side.

It works for all OS X and ESXi versions I’ve tested so far.

–jeroen

ESXi 6.5: mount a datastore that does not automount; esxcfg-volume to the rescue


I had a 1.5 TB SATA disk with VMFS5 (created on ESXi 5.1) that would not mount automatically on ESXi 6.5, not even after a rescan or a fresh boot, so I did this:

[root@ESXi-X10SRH-CF:~] esxcfg-volume --help
esxcfg-volume <options>
-l|--list                               List all volumes which have been
                                        detected as snapshots/replicas.
-m|--mount <VMFS UUID|label>            Mount a snapshot/replica volume, if
                                        its original copy is not online.
-u|--umount <VMFS UUID|label>           Umount a snapshot/replica volume.
-r|--resignature <VMFS UUID|label>      Resignature a snapshot/replica volume.
-M|--persistent-mount <VMFS UUID|label> Mount a snapshot/replica volume
                                        persistently, if its original copy is
                                        not online.
-U|--upgrade <VMFS UUID|label>          Upgrade a VMFS3 volume to VMFS5.
-h|--help                               Show this message.
[root@ESXi-X10SRH-CF:~] esxcfg-volume --list
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 59a5306c-a8793061-4a23-001f29022aed/ST1500LM0032D9YH148-backup
Can mount: Yes
Can resignature: Yes
Extent name: naa.5000c5002dba6642:1 range: 0 - 1430527 (MB)

Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 532cd010-6e8c01d1-45be-001f29022aed/Raid6SATA
Can mount: Yes
Can resignature: Yes
Extent name: naa.600605b00aa054a0ff000021022683ae:1 range: 0 - 1830143 (MB)

[root@ESXi-X10SRH-CF:~] esxcfg-volume -m 532cd010-6e8c01d1-45be-001f29022aed/Raid6SATA
No matching volume 532cd010-6e8c01d1-45be-001f29022aed/Raid6SATA found!
[root@ESXi-X10SRH-CF:~] esxcfg-volume --mount 532cd010-6e8c01d1-45be-001f29022aed
Mounting volume 532cd010-6e8c01d1-45be-001f29022aed
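
A quick way to double check that the datastore is indeed mounted afterwards:

# the volume label should be listed with Mounted set to true
esxcli storage filesystem list
# or simply look for it in the free-space listing
df -h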

Based on: [WayBack] Mount VMFS Datastore – via GUI or via CLI [Guide] – ESX Virtualization

–jeroen

“ESXi 6.5” “vSphere Web Client” “VMware Tools” – how to install/upgrade


Research list: ESXi 6.5 and up vSphere Web Client: change Guest OS Version to the recommended one


There is a very odd thing in the “new” vSphere Web Client that’s mandatory as of ESXi 6.5: when you want to change the Guest OS Version to the recommended one, it’s not in the list.

Recommended (the warning shown by the client; screenshot omitted):

“The configured guest OS (SUSE Linux Enterprise 11 (64-bit)) for this virtual machine does not match the guest that is currently running (SUSE openSUSE (64-bit)). You should specify the correct guest OS to allow for guest-specific optimizations.”

The Guest OS Version drop-down list, however, does not contain that recommended version (screenshot omitted).

Hopefully it is related to [WayBack] ESXi Embedded Host Client – Bugs: #12 Getting Warning that client OS does not match what is running.

–jeroen

Keeping your root visorfs clean: point the path to your own binaries stored on a vmfs volume


Some interesting commands derived from [WayBack] ESXi/ESX error: No free space left on device (1007638) | VMware KB:

  • finding large files:
    find / -path "/vmfs" -prune -o -type f -size +50000k -exec ls -lh '{}' \;
  • finding space on the root file system (which is not listed in df -h):
    stat -f /

This was in the process of trying to keep my local binaries out of [WayBack] VisorFS: A Special-purpose File System for Efficient Handling of System Images – VMware Labs, which is inherently small in size (both total size and number of inodes) because it is a RAM-disk based file system.

Based on that, at [WayBack] Trouble shooting – esx.problem.visorfs.ramdisk.full – DefinIT I found an even more useful command: vdf -h | grep "%\|Ramdisk", which shows the exact usage of what’s in this file system. Example output on one of my systems:

# vdf -h | grep "%\|Ramdisk"
Ramdisk                   Size      Used Available Use% Mounted on
root                       32M        1M       30M   6% --
etc                        28M      184K       27M   0% --
opt                        32M        0B       32M   0% --
var                        48M      352K       47M   0% --
tmp                       256M        4K      255M   0% --
iofilters                  32M        0B       32M   0% --
hostdstats                678M        4M      673M   0% --

The easiest approach is not to store them in the root file system at all, but then you need to alter the default path:

# echo $PATH
/bin:/sbin

Since my local binaries are at /vmfs/volumes/Samsung512NVME/local-bin/, I wanted to persist this path change:

export PATH=$PATH:/vmfs/volumes/Samsung512NVME/local-bin/

Basically you can do this with any current directory on your system: export PATH=$PATH:`pwd`

The easiest way to persist that path is to shoehorn the change into a file that gets run during boot.

The standard – but unsupported – way to do that is shown for instance by:

So, edit /etc/rc.local.d/local.sh (for instance with vi), then shut down all your VMs and reboot the system to verify the effect. However, inserting that export isn’t enough. This is the line you need to add before the exit 0:

sed -i -e 's!PATH=/bin:/sbin!PATH=/bin:/sbin:/vmfs/volumes/Samsung512NVME/local-bin/!' /etc/profile
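
For reference, a minimal sketch of how that could look inside /etc/rc.local.d/local.sh, assuming the same local-bin path; the grep guard just keeps the sed from being applied twice in one boot:

#!/bin/sh
# sketch only; keep whatever the stock local.sh already contains

LOCAL_BIN=/vmfs/volumes/Samsung512NVME/local-bin

# extend the default PATH in /etc/profile, but only if it is not already there
if ! grep -q "$LOCAL_BIN" /etc/profile ; then
  sed -i -e "s!PATH=/bin:/sbin!PATH=/bin:/sbin:$LOCAL_BIN/!" /etc/profile
fi

exit 0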

–jeroen

lamw/ghettoVCB: ghettoVCB


I found out that I had some very old draft notes below, but since then the source has moved to github: lamw/ghettoVCB: ghettoVCB.

Since I find a VIB easier to use than the Offline Bundle (for the differences see [WayBack] VIB vs. Offline Bundle and [WayBack] VMware Front Experience: ESXi Community Packaging Tools), these are the VIB steps to get it installed:

  1. Download https://github.com/lamw/ghettoVCB/raw/master/vghetto-ghettoVCB.vib
  2. Put it in the /tmp directory on your ESXi box (using for instance FileZilla, WinSCP, SCP or other tools)
  3. Install it using esxcli software vib install -v /tmp/vghetto-ghettoVCB.vib -f

Then use it to make backups or restores as described at:

Note that contrary to the documentation, the config file has moved to /etc/ghettovcb/ghettoVCB.conf.

Because of Keeping your root visorfs clean: point the path to your own binaries stored on a vmfs volume, I’m using a copy stored in my local-bin directory (which is backed up by rsync to another disk) and a small ghettoVcb.sh bootstrap script referencing that config file (a sketch of that wrapper is further below), so the backup command for a single VM now is this:

ghettoVcb.sh -m diaspore.opensuse-Tumbleweed-x64

or this for all VMs (about 2 hours from NVMe SSD to HDD; I will probably make this a 2-stage thing):

ghettoVcb.sh -a

VMs are backed up under the directory specified in VM_BACKUP_VOLUME (below that’s ./) in a schema like this:

./diaspore.opensuse-Tumbleweed-x64
./diaspore.opensuse-Tumbleweed-x64/diaspore.opensuse-Tumbleweed-x64-2017-09-24_16-07-08
./diaspore.opensuse-Tumbleweed-x64/diaspore.opensuse-Tumbleweed-x64-2017-09-24_16-07-08/diaspore.opensuse-Tumbleweed-x64.vmx
./diaspore.opensuse-Tumbleweed-x64/diaspore.opensuse-Tumbleweed-x64-2017-09-24_16-07-08/diaspore.opensuse-Tumbleweed-x64-flat.vmdk
./diaspore.opensuse-Tumbleweed-x64/diaspore.opensuse-Tumbleweed-x64-2017-09-24_16-07-08/diaspore.opensuse-Tumbleweed-x64.vmdk
./diaspore.opensuse-Tumbleweed-x64/diaspore.opensuse-Tumbleweed-x64-2017-09-24_16-07-08/STATUS.ok
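
The small bootstrap script mentioned above could look roughly like this (a sketch; it assumes ghettoVCB.sh and its configuration file live in the local-bin directory on the datastore, and uses ghettoVCB’s -g option for a global configuration file):

#!/bin/sh
# ghettoVcb.sh - thin wrapper around the real ghettoVCB.sh (sketch)
LOCAL_BIN=/vmfs/volumes/Samsung512NVME/local-bin
exec "$LOCAL_BIN/ghettoVCB.sh" -g "$LOCAL_BIN/ghettoVCB.conf" "$@"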

In the future, I might move to an NFS-based backup based on these links:

–jeroen

Very old notes:

 

–jeroen

bash – aliasing cd to pushd – is it a good idea? – Unix & Linux Stack Exchange


On my research list: [WayBack] bash – aliasing cd to pushd – is it a good idea? – Unix & Linux Stack Exchange

It has a nice discussion on complements to pushd/popd/cd/dirs including a very nice set of navd scripts that eases the navigation of the directory stack.

I found it because the ESXi busybox does not have pushd and popd, and a cd inside a shell script does not affect the calling shell: [WayBack] linux – Why doesn’t “cd” work in a bash shell script? – Stack Overflow

It also made me find out that the ESXi busybox does support cd - to go to the previous directory. More info on that cd syntax is at [WayBack] bash – Difference between “cd -” and “cd ~-” – Unix & Linux Stack Exchange
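
For completeness, a minimal pushd/popd-like pair that does work in the ESXi busybox sh (a sketch, nowhere near the navd scripts; it breaks on directory names that contain a colon):

# keep a colon-separated directory stack in a shell variable
pushd() { _DIRSTACK="$PWD:$_DIRSTACK"; cd "$1" || return; }
popd()  { [ -n "$_DIRSTACK" ] || return 1; cd "${_DIRSTACK%%:*}" && _DIRSTACK="${_DIRSTACK#*:}"; }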

–jeroen

Show SCSI / HBA modules in ESXi 6.5 with file and version information


A small script I made: Show SCSI / HBA modules in ESXi 6.5 with file and version information:

# the second column of `esxcfg-scsidevs --hbas` holds the driver/module name per HBA
MODULES=`esxcfg-scsidevs --hbas | awk 'FNR > 0 {print $2}'`
for MODULE in $MODULES ; do
    # echo "Probing $MODULE"
    # keep only the module file name and version lines
    vmkload_mod --showinfo $MODULE | grep 'file: \|Version'
done

The script is based on ideas from [WayBack] Determining Network/Storage firmware and driver version in ESXi 4.x and later (1027206) | VMware KB

It works on at least ESXi 6.5, where it shows this on one of my systems:

 input file: /usr/lib/vmware/vmkmod/lsi_msgpt3
 Version: 12.00.02.00-11vmw.650.0.0.4564106
 input file: /usr/lib/vmware/vmkmod/vmw_ahci
 Version: 1.0.0-39vmw.650.1.26.5969303
 input file: /usr/lib/vmware/vmkmod/vmw_ahci
 Version: 1.0.0-39vmw.650.1.26.5969303
 input file: /usr/lib/vmware/vmkmod/vmw_ahci
 Version: 1.0.0-39vmw.650.1.26.5969303
 input file: /usr/lib/vmware/vmkmod/lsi_mr3
 Version: 6.910.18.00-1vmw.650.0.0.4564106
 input file: /usr/lib/vmware/vmkmod/megaraid_sas
 Version: Version 6.603.55.00.2vmw, Build: 4564106, Interface: 9.2 Built on: Oct 26 2016
 input file: /usr/lib/vmware/vmkmod/vmkusb
 Version: 0.1-1vmw.650.1.26.5969303

–jeroen

My LSI adapters and ESXi 6.5


So I won’t forget:

Direct download links in September 2017:

[WayBack] How to upgrade LSI MegaRaid SAS controller firmware using FreeDOS – Teksupport.in

Notes:

LSI provider install (SMI-S, CIM, WBEM):

  1. Download the latest version (at the time of writing VMW-ESX-5.5.0-lsiprovider-500.04.V0.66-0002-5751577.zip)
  2. Unzip into /tmp
  3. esxcli software vib install -f -v /tmp/vmware-esx-provider-lsiprovider.vib
  4. wait for the VIB install to complete
  5. suspend or shutdown all VMs
  6. reboot the ESXi machine
  7. esxcli system wbem set --enable true
  8. Browse to https://192.168.71.91/ui/#/host/monitor/hardware/storage to see if the SMI-S provider is working

MegaRAID Storage Manager (MSM) operation notes

A few tricky things to get right:

  • waiting: MSM is unbelievably slow (starting on SSD takes 10 seconds; discovery 30; connecting to the host 60 – without any indication something is happening; fetching host data another 60)
  • old MSM versions are unstable (especially 14.x and lower), so keep current
  • ensure the hosts file on both the ESXi and Windows side match (otherwise it won’t discover anything, or discover as 0.0.0.0)
  • enable promiscuous mode on your vSwitch
  • if all else fails, disable any firewalls then enable bit by bit to see where it went wrong

Great installation steps:

MegaCLI installs

  1. Download the latest version that has VMware support (at the time of writing 8-07-07_MegaCLI.zip)
  2. Unzip into /tmp
  3. esxcli software vib install -f -v /tmp/VmwareMN/vmware-esx-MegaCli-8.07.07.vib
  4. wait for the VIB install to complete

Now you can run the command /opt/lsi/MegaCLI/MegaCli (yes, the casing of those two differs!), but you must run it from that directory, or ensure that LD_LIBRARY_PATH contains /opt/lsi/MegaCLI.

StorCLI installs

Based on [WayBack] StorCLI unter VMware vSphere installieren – Thomas-Krenn-Wiki (in German)

  1. Download the latest version that has VMware support (at the time of writing 1.23.02_StorCLI.zip)
  2. Recursively uncompress the ZIP file into /tmp**
  3. esxcli software vib install -f -v /tmp/storcli_All_OS/Vmware-OP/vmware-esx-storcli-1.23.02.vib
  4. wait for the VIB install to complete

Now you can run the command /opt/lsi/storcli/storcli, but you must run it from that directory, or ensure that LD_LIBRARY_PATH contains /opt/lsi/storcli.

Example:

execute-storcli.sh /cALL show all | grep 'Controller = \|Model = \|Serial Number = \|Firmware'
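
execute-storcli.sh in the example above is not part of the StorCLI download; a sketch of what such a wrapper could look like, based on the directory/LD_LIBRARY_PATH remark above:

#!/bin/sh
# run storcli from its install directory so it can find its shared library
cd /opt/lsi/storcli || exit 1
LD_LIBRARY_PATH=/opt/lsi/storcli ./storcli "$@"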

The accompanying readme states:

The vib file in "Vmware-NDS/" folder works with native driver.
The vib file in "Vmware-MN/" folder works with VMKlinux driver.

So I did a bit more searching based on the files in the VMware directories and came up with this list:

  • storcli_All_OS/Vmware/storcli.zip
    • Looks like it targets ESXi 3.x and older
  • storcli_All_OS/Vmware-MN/vmware-esx-storcli-1.23.02.vib with storcli_All_OS/Vmware-MN/VMWARE_MN_Readme.txt
    • Targets the vmklinux drivers that are being phased out with ESXi 5.5 and up
  • storcli_All_OS/Vmware-OP/vmware-esx-storcli-1.23.02.vib with storcli_All_OS/Vmware-OP/VMWARE_MN_NDS_Readme.txt
    • Targets the New Driver architecture introduced with ESXi 5.5 and used more and more since then

Background reading:

** unzip doesn’t work:

# unzip -d /tmp/ 1.23.02_StorCLI.zip
Archive: 1.23.02_StorCLI.zip
inflating: 1.23.02_StorCLI.txt
unzip: short read

But a combination of 7za and unzip does work:

# 7za x -o/tmp/ 1.23.02_StorCLI.zip
7-Zip (a) [32] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,32 bits,20 CPUs Intel(R) Xeon(R) CPU E5-2630L v4 @ 1.80GHz (406F1),ASM,AES-NI)

Scanning the drive for archives:
1 file, 48778476 bytes (47 MiB)

Extracting archive: 1.23.02_StorCLI.zip
--
Path = 1.23.02_StorCLI.zip
Type = zip
Physical Size = 48778476

Everything is Ok

Files: 2
Size: 48928561
Compressed: 48778476

# unzip -d /tmp/ /tmp/storcli_All_OS.zip 
Archive:  /tmp/storcli_All_OS.zip
   creating: storcli_All_OS/
   creating: storcli_All_OS/EFI/
  inflating: storcli_All_OS/EFI/license.txt
   creating: storcli_All_OS/EFI/UDK/
  inflating: storcli_All_OS/EFI/UDK/license.txt
  inflating: storcli_All_OS/EFI/UDK/storcli.efi
   creating: storcli_All_OS/FreeBSD/
  inflating: storcli_All_OS/FreeBSD/FreeBSD_readme.txt
  inflating: storcli_All_OS/FreeBSD/license.txt
  inflating: storcli_All_OS/FreeBSD/storcli.tar
  inflating: storcli_All_OS/FreeBSD/storcli64.tar
   creating: storcli_All_OS/Linux/
  inflating: storcli_All_OS/Linux/license.txt
  inflating: storcli_All_OS/Linux/LINUX_Readme.txt
  inflating: storcli_All_OS/Linux/storcli-1.23.02-1.noarch.rpm
   creating: storcli_All_OS/Linux-OEL-Sparc/
  inflating: storcli_All_OS/Linux-OEL-Sparc/license_OELSparc.txt
  inflating: storcli_All_OS/Linux-OEL-Sparc/storcli64-1.23.02-1.sparc64.rpm
   creating: storcli_All_OS/Linux-ppc/
   creating: storcli_All_OS/Linux-ppc/Big Endian/
  inflating: storcli_All_OS/Linux-ppc/Big Endian/license.txt
  inflating: storcli_All_OS/Linux-ppc/Big Endian/storcli.tar
   creating: storcli_All_OS/Linux-ppc/Little Endian/
  inflating: storcli_All_OS/Linux-ppc/Little Endian/license.txt
  inflating: storcli_All_OS/Linux-ppc/Little Endian/Readme.txt
  inflating: storcli_All_OS/Linux-ppc/Little Endian/storcli64_1.23.02_ppc64el.deb
   creating: storcli_All_OS/Solaris/
  inflating: storcli_All_OS/Solaris/license.txt
  inflating: storcli_All_OS/Solaris/SOLARIS_Readme.txt
  inflating: storcli_All_OS/Solaris/storcli.pkg
   creating: storcli_All_OS/Solaris Sparc/
  inflating: storcli_All_OS/Solaris Sparc/license.txt
  inflating: storcli_All_OS/Solaris Sparc/storcli.pkg
   creating: storcli_All_OS/Ubuntu/
  inflating: storcli_All_OS/Ubuntu/read_me.txt
  inflating: storcli_All_OS/Ubuntu/storcli_1.23.02_all.deb
   creating: storcli_All_OS/Vmware/
  inflating: storcli_All_OS/Vmware/license.txt
   creating: storcli_All_OS/Vmware/Linux/
  inflating: storcli_All_OS/Vmware/Linux/storcliKL-1.23.02-1.noarch.rpm
  inflating: storcli_All_OS/Vmware/Rel_read_me.txt.txt
  inflating: storcli_All_OS/Vmware/storcli.zip
   creating: storcli_All_OS/Vmware/Windows/
  inflating: storcli_All_OS/Vmware/Windows/StorCLIKL.zip
   creating: storcli_All_OS/Vmware-MN/
  inflating: storcli_All_OS/Vmware-MN/license.txt
  inflating: storcli_All_OS/Vmware-MN/vmware-esx-storcli-1.23.02.vib
  inflating: storcli_All_OS/Vmware-MN/VMWARE_MN_Readme.txt
   creating: storcli_All_OS/Vmware-OP/
  inflating: storcli_All_OS/Vmware-OP/license.txt
  inflating: storcli_All_OS/Vmware-OP/vmware-esx-storcli-1.23.02.vib
  inflating: storcli_All_OS/Vmware-OP/VMWARE_MN_NDS_Readme.txt
   creating: storcli_All_OS/Windows/
  inflating: storcli_All_OS/Windows/license.txt
  inflating: storcli_All_OS/Windows/storcli.exe
  inflating: storcli_All_OS/Windows/storcli64.exe
  inflating: storcli_All_OS/Windows/WIN_ReadMe.txt

9260-8i firmware update

  1. Download the latest firmware (at the time of writing 12.15.0-0239_MR_2108_SAS_FW_2.130.403-4660.zip) into /tmp
  2. unzip -d /tmp/ /tmp/12.15.0-0239_MR_2108_SAS_FW_2.130.403-4660.zip
  3. Find out the controller number
  4. Where 0 is the controller number, execute /opt/lsi/storcli/storcli /c0 download file=/tmp/mr2108fw.rom
  5. Wait for the firmware update to complete
  6. Suspend or shutdown all VMs
  7. Reboot
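
Step 3 (finding the controller number) can be done with storcli itself; a sketch of the whole flashing sequence, with controller 0 assumed as in step 4:

cd /opt/lsi/storcli
./storcli show                                  # summary of all controllers; note the index
./storcli /c0 show all | grep 'Firmware'        # current firmware version, as a before/after check
./storcli /c0 download file=/tmp/mr2108fw.rom   # flash the new firmware image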

–jeroen

Trying to temporarily lower the ESXi acceptance level when installing VIBs


This is an interesting question at [WayBack] How can I lower the ESXI acceptance level when a forced install has been done? – Server Fault.

The default level on ESXi 6.5 is this:

# esxcli software acceptance get
PartnerSupported

Since I had ghettoVCB installed with the -f option, installing any PartnerSupported VIB would result in this error:

 [DependencyError]
 VIB virtuallyGhetto_bootbank_ghettoVCB_1.0.0-0.0.0 violates extensibility rule checks: ['(line 24: col 0) Element vib failed to validate content']
 VIB virtuallyGhetto_bootbank_ghettoVCB_1.0.0-0.0.0's acceptance level is community, which is not compliant with the ImageProfile acceptance level partner
 To change the host acceptance level, use the 'esxcli software acceptance set' command.
 Please refer to the log file for more details.

This fails:

# esxcli software acceptance set --level=CommunitySupported
[AcceptanceConfigError]
Unable to set acceptance level of community due to installed VIBs virtuallyGhetto_bootbank_ghettoVCB_1.0.0-0.0.0 having a lower acceptance level.
Please refer to the log file for more details.

The workaround is to uninstall virtuallyGhetto_bootbank_ghettoVCB_1.0.0-0.0.0, then install the PartnerSupported VIB, then re-install ghettoVCB with the --force option or with a lowered acceptance level:

  1. Remove the ghettoVCB installation: esxcli software vib remove -n ghettoVCB
  2. Perform the steps that the ghettoVCB install prevented (install a non-community VIB, upgrade your ESXi system, etc.)
  3. Reinstall ghettoVCB, either with the --force (-f) option or after lowering the acceptance level
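
In esxcli terms the workaround boils down to something like this (a sketch; the VIB name and install command are the ones used earlier in this post):

# 1. remove the community-level VIB that blocks the acceptance check
esxcli software vib remove -n ghettoVCB
# 2. ...install the PartnerSupported VIB / perform the upgrade here...
# 3. put ghettoVCB back, forcing past the acceptance level check
esxcli software vib install -v /tmp/vghetto-ghettoVCB.vib -f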

–jeroen


ESXi: console commands for digging through your HBA/disk/datastore configuration


Two posts with interesting commands to help dig through your HBA/disk/datastore configuration from the console:

One day I will write a script that – per datastore – lists all the devices related to it including their HBA and LUN.

For that, I will likely need these references:

For now this works:

  • Get the list of data stores (note the Device Name column has the NAA_ID you need below):
    esxcli storage vmfs extent list
  • Get the path information to find HBA, Channel, Target and LUN:
    esxcli storage core path list --device NAA_ID
  • Get the list of HBAs:
    esxcli storage core adapter list
  • Get device details (including Model and Revision):
    esxcli storage core device list --device NAA_ID
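
A rough first stab at such a script could look like this (a sketch, assuming datastore names without spaces; it pulls the device name out of the extent list and prints the adapter/channel/target/LUN of each backing path):

#!/bin/sh
# per VMFS extent, show the path information of the backing device
esxcli storage vmfs extent list | awk 'NR > 2 { print $(NF-1) }' | while read DEVICE ; do
  echo "=== $DEVICE ==="
  esxcli storage core path list --device "$DEVICE" | \
    grep 'Runtime Name\|Adapter:\|Channel:\|Target:\|LUN:'
done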

The example below shows a drive connected to a SAS3008-based controller which storcli cannot access (nor can MegaCli), but MegaRAID Storage Manager (MSM) can.

MSM allowed me to find the serial number of the drive by the Target Transport Details value 4433221106000000 as being on Slot number 6 (which seems to indicate Target numbers are 1-based whereas LUN is 0-based).

# esxcli storage vmfs extent list
Volume Name                     VMFS UUID                            Extent Number  Device Name                                                                 Partition
------------------------------  -----------------------------------  -------------  --------------------------------------------------------------------------  ---------
...
ST6000VX0001-1SH                59a33f7b-66df7c00-11b0-0cc47aaa9742              0  naa.5000c50087762d1b                                                                1
# esxcli storage core path list -d naa.5000c50087762d1b 
sas.500304801ce1d700-sas.4433221106000000-naa.5000c50087762d1b
   UID: sas.500304801ce1d700-sas.4433221106000000-naa.5000c50087762d1b
   Runtime Name: vmhba0:C0:T7:L0
   Device: naa.5000c50087762d1b
   Device Display Name: Local ATA Disk (naa.5000c50087762d1b)
   Adapter: vmhba0
   Channel: 0
   Target: 7
   LUN: 0
   Plugin: NMP
   State: active
   Transport: sas
   Adapter Identifier: sas.500304801ce1d700
   Target Identifier: sas.4433221106000000
   Adapter Transport Details: 500304801ce1d700
   Target Transport Details: 4433221106000000
   Maximum IO Size: 4194304
# esxcli storage core adapter list
HBA Name  Driver        Link State  UID                   Capabilities  Description                                                           
--------  ------------  ----------  --------------------  ------------  ----------------------------------------------------------------------
vmhba0    lsi_msgpt3    link-n/a    sas.500304801ce1d700                (0000:01:00.0) Avago (LSI Logic) Fusion-MPT 12GSAS SAS3008 PCI-Express
...
vmhba32   vmkusb        link-n/a    usb.vmhba32                         () USB  
# esxcli storage core device list --device naa.5000c50087762d1b 
naa.5000c50087762d1b
   Display Name: Local ATA Disk (naa.5000c50087762d1b)
   Has Settable Display Name: true
   Size: 5723166
   Device Type: Direct-Access 
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.5000c50087762d1b
   Vendor: ATA     
   Model: ST6000VX0001-1SH
   Revision: VN02
   SCSI Level: 6
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: true
   Is Removable: false
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters: 
   VAAI Status: unsupported
   Other UIDs: vml.02000000005000c50087762d1b535436303030
   Is Shared Clusterwide: false
   Is Local SAS Device: true
   Is SAS: true
   Is USB: false
   Is Boot USB Device: false
   Is Boot Device: false
   Device Max Queue Depth: 32
   No of outstanding IOs with competing worlds: 32
   Drive Type: physical
   RAID Level: NA
   Number of Physical Drives: 1
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

–jeroen

VMFS metadata files


For my own reference:

Disk space under VMFS-3 is organized according to four resource types: blocks, sub-blocks, pointer blocks, and file descriptors. Resources are grouped into clusters, which form cluster groups. Every resource type is administered by one or more system files. Let’s have a look at what those abbreviated file names stand for:

  • fbb.sf = file block bitmap.sf
  • fdc.sf = file descriptor cluster.sf
  • pbc.sf = pointer block cluster.sf
  • sbc.sf = sub-block cluster.sf
  • vh.sf = volume header.sf
  • dd.sf = scsi device description.sf

VMFS-5 uses one more system file:

  • pb2.sf = pointer block 2.sf

Source: [Archive.is] VMFS metadata files

Some wizardry: vmkfstools | virtualhobbit


Some wizardry: [WayBack] vmkfstools | virtualhobbit.

This includes:

  • finding which VMFS partitions are there the hard way
  • initialising partitions from known good data
  • vmkfstools -V (yes, capital V is for VMFS rescan, as lowercase v is for verbose)

Found after reading [WayBack] Datastore not mounted after reboot of ESXi 5.5 | VMware Communities

Then found this:

That solved my problem!

# esxcfg-volume --list
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 532cd010-6e8c01d1-45be-001f29022aed/Raid6SSD
Can mount: Yes
Can resignature: Yes
Extent name: naa.600605b00aa054a0ff000021022683ae:1 range: 0 - 1830143 (MB)
# esxcfg-volume --mount 532cd010-6e8c01d1-45be-001f29022aed
Mounting volume 532cd010-6e8c01d1-45be-001f29022aed

And there it was:

# df -h
Filesystem   Size   Used Available Use% Mounted on
...
VMFS-5       1.7T   1.6T    169.6G  91% /vmfs/volumes/Raid6SSD
...

Note you can mount non-persistent (--mount) or persistent (--persistent-mount) by both UUID and label, so there are four choices for mounting:

esxcfg-volume --mount UUID
esxcfg-volume --mount label
esxcfg-volume --persistent-mount UUID
esxcfg-volume --persistent-mount label

–jeroen

Some links and notes on ESXi and virtualised NAS systems


For my own memory:

[WayBack] Best Hard Drives for ZFS Server (Updated 2017) | b3n.org

Best Buy Guides (BBGs) – mux’ blog – Tweakblogs – Tweakers « The Wiert Corner – irregular stream of stuff

ZFS, dedupe and RAM:

ZFS, FreeBSD, ZoL (ZFS on Linux) and SSDs:

OpenSuSE related

Samba/CIFS related

–jeroen

Determining the ESXi installation type (2014558) | VMware KB


Via [WayBack] Determining the ESXi installation type (2014558) | VMware KB

# esxcfg-info -e
boot type: visor-usb

That’s on my X10SRH-CF system which runs from USB.

Values you can get:

  • visor-pxe indicates a PXE deployment
  • visor-thin indicates an installable deployment
  • visor-usb indicates an embedded deployment

If your installation is visor-thin based (running from hard disk), then you can convert it to visor-usb; the steps are at [WayBack] visor-thin & vsantraces – Hypervisor.fr (in French, but Google Translate is quite OK). It skips a few of the steps mentioned in [WayBack] How To Backup & Restore Free ESXi Host Configuration | virtuallyGhetto, so for saving your current config it’s best to follow these steps:

  1. Shutdown or suspend all VMs
  2. vim-cmd hostsvc/firmware/sync_config
  3. vim-cmd hostsvc/firmware/backup_config
  4. Copy the generated backup from /scratch/downloads (a UUID directory under it) to a safe location
  5. vim-cmd hostsvc/maintenance_mode_enter
  6. shutdown
  7. Install the same ESXi version on a USB disk
  8. Boot from the USB disk
  9. copy the backup to /tmp/configBundle.tgz
  10. vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz
  11. reboot

–jeroen

via [WayBack] How to tell if ESXi is installed to SD card or local HDD? : vmware
