Tag: esxi

Install VMware vSphere 7.0 on Proxmox

Verify

root@proxmox:~# cat /sys/module/kvm_intel/parameters/nested
Y

Enable

Intel CPU

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

AMD CPU

echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf

Reload Module

modprobe -r kvm_intel
modprobe kvm_intel

Note: for more info, see https://pve.proxmox.com/wiki/Nested_Virtualization

Install

ISO

Download the installer ISO, such as VMware-VMvisor-Installer-7.0U2a-17867351.x86_64.iso

VM Configure

  • General Tab

    • Name:
  • OS Tab

    • Type: Linux
    • Version: 5.x – 2.6 Kernel
  • System Tab

    • Graphic card: Default
    • SCSI Controller: VMware PVSCSI
    • BIOS: SeaBIOS (OVMF (UEFI) should work too)
    • Machine: q35
  • Hard Disk Tab

    • Bus/Device: SATA
    • Disk size (GiB): 16
  • CPU Tab

    • Cores: 4 (at least 2; 4 is better if the physical CPU has enough cores)
    • Type: host (or Default (kvm64))
    • Enable NUMA: Check (if possible)
  • Memory Tab

    • Memory (MiB): 4096 (at least 4096; more is better)
    • Ballooning Device: Uncheck
  • Network Tab

    • Model: VMware vmxnet3
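
For reference, roughly the same VM can be created from the Proxmox shell with qm. This is a minimal sketch assuming VM ID 100, storage local-lvm, bridge vmbr0, and the ISO already uploaded to local storage (all examples; adjust to your environment):

# Create the ESXi guest with the settings from the tabs above
qm create 100 \
  --name esxi70 \
  --ostype l26 \
  --machine q35 \
  --bios seabios \
  --cores 4 --cpu host --numa 1 \
  --memory 4096 --balloon 0 \
  --scsihw pvscsi \
  --sata0 local-lvm:16 \
  --net0 vmxnet3,bridge=vmbr0 \
  --cdrom local:iso/VMware-VMvisor-Installer-7.0U2a-17867351.x86_64.iso

Note that --balloon 0 disables the ballooning device, matching the Memory tab above.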

References

Nested Virtualization
How to Install/use/test VMware vSphere 7.0 (ESXi 7.0) on Proxmox VE 6.3-3 (PVE)

ESXi with UEFI iSCSI boot on Raspberry Pi

Steps

Setup iSCSI disk

  • Create iSCSI Target and LUN in Synology
  • Download the RPi4 UEFI firmware and unzip it to an SD card formatted with a FAT32 partition
  • Boot from the SD card, and perform the following tasks in the UEFI menu
    • Disable the 3GB memory limit
      Device Manager => Raspberry Pi Configuration => Advanced Configuration => Limit RAM to 3 GB
    • Create a device mapped to the iSCSI target (example values below)
      Device Manager => iSCSI Configuration => Add an Attempt
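
For illustration, an attempt typically needs values like the following; the IQN, IP address, and LUN here are hypothetical examples, so use the values from your Synology target:

Target Name:    iqn.2000-01.com.synology:nas.Target-1
Target Address: 192.168.1.10, Port 3260
Boot LUN:       1
Authentication: CHAP user/password, if enabled on the target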

After Attempt 1 is created, reset (restart) the Raspberry Pi. Now, in Boot Manager, you should see UEFI SYNOLOGY iSCSI Storage.

Setup boot order

  • Change the boot order to put the iSCSI entry before the other network boot entries; otherwise, there will be a long wait.

Prepare ESXi installation disk

  • Download VMware-VMvisor-Installer-7.0.0-xxxx.aarch64.iso and flash it to a USB device, as sketched below
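
A minimal sketch of flashing from a Linux machine, assuming the USB stick shows up as /dev/sdX (a placeholder; double-check the device name first, since this overwrites it). balenaEtcher or Rufus work as well:

# Raw-write the installer ISO to the USB stick (destroys its current contents)
dd if=VMware-VMvisor-Installer-7.0.0-xxxx.aarch64.iso of=/dev/sdX bs=4M status=progress conv=fsync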

Install ESXi

  • Reset (reboot) again, and in the UEFI menu select boot from the USB device
  • Then perform the ESXi installation, selecting the iSCSI disk as the target

After the installation completes, remove the ESXi installation USB; another reset is then required.

Configure ESXi

  • Boot from the iSCSI disk
  • Change the ESXi name, etc.

Troubleshooting

Unable to see iSCSI disk in Boot Manager

Most likely the iSCSI configuration is wrong.

  • Check iSCSI Target Name
  • Check iSCSI Target IP
  • Check iSCSI LUN ID (this issue cost me a few hours)
  • Check User/Password

Synchronous Exception

After the installation completed, the Pi suddenly could not boot into any destination and just showed the error Synchronous Exception.

In the end, I had to recopy the UEFI image onto the micro SD card and redo the iSCSI configuration. Luckily the iSCSI disk, which contains the installed ESXi image, had no issue.

References

Boot ESXi-Arm Fling on a Raspberry Pi 4 Using ISCSI
Raspberry Pi 4 UEFI Firmware Images
ESXi on Arm 10/22 update
Install ESXi for ARM 7.0.0 on a Raspberry Pi 4 Model B 8GB
Synchronous Exception at 0x00000000371013D8 #97

Creating FreeNAS VM on ESXi

ZFS advantages

Synology uses btrfs as its filesystem, which natively lacks bad-sector support. ZFS could be the choice because of the advantages below.

Bad sector support

Synology can also handle bad sectors, but btrfs itself doesn't. Not sure how Synology handles it, but bad sectors can cause Synology volumes to crash; the volumes then go into read-only mode, reconfiguration is required, and it takes time to move data off the impacted volumes.

Block level dedup

This is an interesting feature of ZFS. btrfs dedup can be done by running scripts, but ZFS does it natively.
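
For illustration, dedup in ZFS is just a per-dataset property; a minimal sketch using hypothetical pool/dataset names (note dedup is RAM-hungry, which is why it is off by default):

# Enable block-level dedup on a dataset (applies only to newly written data)
zfs set dedup=on pool01/download
# The pool-wide dedup ratio shows up in the DEDUP column
zpool list pool01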

Decision

Software

FreeNAS/TrueNAS has a full set of NAS features.

Hardware

FreeNAS/TrueNAS doesn't support ARM CPUs, and cheap ARM boards, such as the Raspberry Pi 4, don't support SATA; a normal NAS needs around 4 SATA drives. So I don't intend to use an ARM board for the NAS.

Conclusion

Decided to try FreeNAS/TrueNAS on an old PC running ESXi, which has 32GB RAM, an 8-thread CPU, and 10Gb Ethernet.

  1. Assign 8GB RAM and 2 threads to the FreeNAS VM, and enable memory and CPU hot plug in order to increase memory and CPU dynamically.

    Note: An error occurred when adding memory to the VM. See the post Error on adding hot memory to TrueNAS VM.

  2. Create an RDM disk to access the hard disk directly and improve disk performance.

  3. Create a VM network interface which supports 10Gb.

  4. Create an iSCSI disk to hold the VM image, because the RDM disk's vmdk pointer file can not be created on NFS.

Create iSCSI storage

Although the ESXi host is managed by vCenter, I could not find where to configure the iSCSI device there. So log in to the ESXi web interface and configure iSCSI under Storage -> Adapters.

Note: the target is the IQN, not the name.
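
The same can also be done from an SSH session with esxcli; a minimal sketch assuming the software iSCSI adapter comes up as vmhba64 and the Synology portal is at 192.168.1.10 (both are examples):

# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# List adapters to find the software iSCSI adapter name (e.g. vmhba64)
esxcli iscsi adapter list
# Add the Synology portal as a send-targets (dynamic) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.10:3260
# Rescan so the new LUN shows up
esxcli storage core adapter rescan --adapter=vmhba64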

Configure network on ESXi

During the creation, ESXi showed an error that two network interfaces were detected on the network used by iSCSI, so I removed the second (standby) interface during iSCSI adapter creation, then put it back after the creation completed.

Create RDM disk

Follow the instructions given in Raw Device Mapping for local storage (1017530) to create the RDM disk.

  1. Open an SSH session to the ESXi host.

  2. Run this command to list the disks that are attached to the ESXi host:

    ls -l /vmfs/devices/disks
  3. From the list, identify the local device you want to configure as an RDM and copy the device name.

    Note: The device name is likely prefixed with t10. and looks similar to:

    t10.F405E46494C4540046F455B64787D285941707D203F45765
  4. To configure the device as an RDM and output the RDM pointer file to your chosen destination, run this command:

    vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk

    For example:

    vmkfstools -z /vmfs/devices/disks/t10.F405E46494C4540046F455B64787D285941707D203F45765 /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk

    Note: The newly created RDM pointer file appears to be the same size as the raw device it is mapped to, but it is a dummy file and does not consume any storage space.

  5. When you have created the RDM pointer file, attach the RDM to a virtual machine using the vSphere Client:

    • Right click the virtual machine you want to add an RDM disk to.
    • Click Edit Settings.
    • Click Add.
    • Select Hard Disk.
    • Select Use an existing virtual disk.
    • Browse to the directory you saved the RDM pointer file to in step 4, select the RDM pointer file, and click Next.
    • Select the virtual SCSI controller you want to attach the disk to and click Next.
    • Click Finish.

You should now see your new hard disk in the virtual machine inventory as Mapped Raw LUN.
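
To double-check the pointer file, vmkfstools can query an RDM mapping; a small sketch reusing the example path from step 4:

# Query the RDM pointer file; it should report the raw device it maps to
vmkfstools -q /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk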

Create VM on iSCSI storage

Create the VM using the following parameters:

  • VM type should be FreeBSD 12 (64 bit)
  • Memory: 8GB recommended (configured 4GB at the beginning, no issue found)
  • SCSI adapter should be LSI Logic Parallel; otherwise, the hard disk can not be detected
  • Network adapter should be VMXNET3 to support 10Gb
  • Add the RDM disk to the VM (I did this after the VM was created)

Configure FreeNAS

Configure the network in the FreeNAS console, then configure the pool, dataset, user, sharing, and ACL in the FreeNAS web UI.

Configuration in FreeNAS console

Configure FreeNAS network

Configure IP address

Configuration in FreeNAS website

Use a browser to access the IP configured in the previous step.

DNS

Configure DNS

Default route

The default route is added as a static route from 0.0.0.0 to the gateway.

NTP

Configure timezone

Pool

Configure the pool, giving it the name pool01.

Dataset

Under the pool, add a dataset, such as download (a CLI sketch follows).
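
Equivalently, from the FreeNAS shell this is a plain ZFS dataset; a minimal sketch with the pool01 and download names above (the web UI remains the supported way):

# Create the dataset and verify it
zfs create pool01/download
zfs list -r pool01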

User

Create a user to be used later in the ACL.

Sharing

Create SMB sharing

Configure owner in ACL

Assign the owner and group to the user created above, and select Restrict as the mode.

Delete Pool (if needed)

Select Export/Disconnect in the pool's settings.

Result

ESXi hangs

The biggest issue encountered is that ESXi hangs, complaining that one PCPU is freezing. I will try installing directly to a USB drive to see whether the problem only happens on ESXi.

Network speed

Fast as expected.

Compare with Synology DS2419+

They are not directly comparable, because the DS2419+ has more disks in its volume.

  • Slower.
  • Stopped when flushing the disks.

Compare with Synology DS1812+

They are not directly comparable, because the DS1812+ has three slow disks in a RAID volume.