Tuesday, October 29, 2013

Solaris 10 (NFS Server/Client) Configuration

NFS Server Side

Solaris-1 - 192.168.100.166 - Server
Solaris-2 - 192.168.100.234 - Client

#svcs -a | grep -i nfs
svc:/network/rpc/bind:default           (required)
svc:/network/nfs/status:default         (required)
svc:/network/nfs/nlockmgr:default       (required)
svc:/network/nfs/server:default         (required)
svc:/network/nfs/mapid:default          (NFSv4, required)

svc:/network/nfs/rquota:default         (optional)
svc:/network/rpc/gss:default            (NFSv4, optional)
/usr/lib/nfs/nfslogd                    (NFSv2, NFSv3, optional)

Check the NFS server service and its dependencies
#svcs -l nfs/server

Check required services status
#svcs -v svc:/network/rpc/bind:default svc:/network/nfs/status:default svc:/network/nfs/nlockmgr:default svc:/network/nfs/server:default svc:/network/nfs/mapid:default

Check optional services status
#svcs -v svc:/network/nfs/rquota:default svc:/network/rpc/gss:default 

If the service is stopped, you can start it with
#svcadm enable -r svc:/network/nfs/server:default 

If the service cannot be started, verify with
#svcs -xv svc:/network/nfs/server:default 
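
If a service has dropped into the maintenance state, it usually helps to clear it before re-enabling (a quick sketch, using the same FMRI as above):
#svcadm clear svc:/network/nfs/server:default
#svcadm enable -r svc:/network/nfs/server:default
#svcs -v svc:/network/nfs/server:default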

#rpcinfo -p
    
#vi /etc/ipf/ipf.conf
nfsd
pass in quick proto tcp from any to any port = 2049 keep state
pass in quick proto udp from any to any port = 2049 keep state

sunrpc
pass in quick proto tcp from any to any port = 111 keep state
pass in quick proto udp from any to any port = 111 keep state

lockd
pass in quick proto tcp from any to any port = 4045 keep state
pass in quick proto udp from any to any port = 4045 keep state
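
After editing /etc/ipf/ipf.conf the new rules are not active until IP Filter reloads them. A minimal sketch (assuming the ipfilter service is already enabled on this host):
#ipf -Fa -f /etc/ipf/ipf.conf     (flush the active rule set and load the new one)
#ipfstat -io                      (list the loaded in/out rules to verify)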


#/usr/bin/egrep -v '^$|^#' /etc/dfs/dfstab

#share -F nfs -o nosuid,rw=@192.168.100.0/24,anon=60001 -d "Common Shared directories" /tempnfssun1  (60001=nobody)
#share -F nfs -o rw=solaris-2:@192.168.100.234,root=@192.168.100.234,ro,nosub /tempnfssun1
#share -F nfs -o ro=solaris3,rw=solaris-2,root=solaris-2  /tempnfssun1
#share -F nfs -o ro=solaris-2  /tempnfssun2
#share -F nfs -o ro=client1:client2,rw=client3:client4,root=client4 /tempshare (client=hostname)
#share -F nfs -o ro=@192.168.100.0/24 /usr/share/man
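
Note that share commands run on the command line take effect immediately but do not survive a reboot. For a persistent share, the same share line goes into /etc/dfs/dfstab and is activated with shareall (a sketch reusing the /tempnfssun1 example above):
#vi /etc/dfs/dfstab
share -F nfs -o nosuid,rw=@192.168.100.0/24,anon=60001 -d "Common Shared directories" /tempnfssun1
#shareall -F nfs
#cat /etc/dfs/sharetab     (lists the active shares)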

Share command Examples

#share -F nfs -o ro,rw=solaris-2:solaris-3    /shared_nfs_folder
Read-Only access to all
Read-Write access to solaris-2 & solaris-3

share -F nfs -o rw=solaris-2:solaris-3,root=solaris-2     /shared_nfs_folder
Read-Write access to solaris-2 and solaris-3
Root access granted to the root account on solaris-2

share -F nfs -o ro,anon=0  /shared_nfs_folder
anon=0 maps anonymous and root requests to UID 0, effectively granting all client machines root access to this share; combined with 'ro' it remains read-only

share -F nfs -o nosub,ro     /shared_nfs_folder
nosub allows clients to mount only the top of the shared directory structure, not its subdirectories

share -F nfs -o rw=.mmx.com   /shared_nfs_folder
Read-write access for all clients in the DNS domain mmx.com

share -F nfs -o rw=@192.168.100.0/24  /shared_nfs_folder
Read-write access only for hosts in the 192.168.100.0/24 subnet

share -F nfs -o rw=solaris-2:-solaris-3:@192.168.100.0/24  /shared_nfs_folder
Read-write access to solaris-2 and to hosts in the 192.168.100.0/24 network;
solaris-3 (the leading '-') is explicitly denied even though it is in that subnet

#/usr/sbin/unshare /usr/share/man

On the server, you can use 'unshareall' to stop sharing all exported filesystems and verify with 'dfshares':
#/usr/sbin/unshareall
#/usr/sbin/dfshares
#/usr/sbin/share
#/usr/bin/cat /etc/dfs/sharetab
#shareall -F nfs

NFS Client Side

svc:/network/rpc/bind:default           (required)
svc:/network/nfs/status:default         (required)
svc:/network/nfs/nlockmgr:default       (required)
svc:/network/nfs/client:default         (required)
svc:/network/nfs/cbd:default            (NFSv4, required)
svc:/network/nfs/mapid:default          (NFSv4, required)

Check that the required services are running
#svcs -v svc:/network/nfs/client:default svc:/network/nfs/status:default svc:/network/nfs/nlockmgr:default svc:/network/rpc/bind:default

Check which shared directories are available from the NFS server
#/usr/sbin/dfshares 192.168.100.166
#showmount -e solaris-1

#/usr/bin/mkdir /home2
#/usr/bin/ls -ld /home2        
        drwxr-xr-x   2 root     root           2 Feb 20 03:12 /home2/

#/usr/sbin/mount -F nfs -o rw,bg,intr 192.168.100.166:/export/home /home2
#/usr/bin/ls -ld /home2                           
        drwxr-xr-x   4 root     root         512 Dec 21 02:21 /home2/

#/usr/sbin/df -h /home2
        Filesystem             size   used  avail capacity  Mounted on
        10.0.23.191:/export/home
                               7.9G   4.4G   3.4G    57%    /home2

#/usr/sbin/mount | /usr/bin/grep /home2
        /home2 on 10.0.23.191:/export/home remote/read/write/setuid/devices/rstchown/bg/intr/xattr/dev=8740001 on Sun Feb 20 03:26:37 2011

To make the mount persistent across reboots, add an entry to /etc/vfstab:

#vi /etc/vfstab

192.168.100.166:/export/home        -       /home2  nfs     -       yes     rw,bg,intr
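
With that vfstab entry in place the filesystem is mounted at boot, and it can also be mounted by mount point alone to verify the entry (a quick check, same paths as above):
#umount /home2
#mount /home2
#df -h /home2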

Note: Because NFSv4 does not use the MOUNT protocol, 'nosub' only affects
client-side mounts using NFSv2 and NFSv3. Since Solaris 10 attempts NFSv4 by
default, falling back to v2 or v3 as necessary, I deliberately set 'vers=3'
in the mount command below to illustrate 'nosub'.

#mount -F nfs -o rw,intr,vers=3 10.0.23.191:/usr/sfw /opt/sfw
#mount -F nfs -o ro,vers=4 solaris-2:/tempnfssun1 /tempnfssun1 && echo $?
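
To confirm which NFS version a mount actually negotiated, nfsstat -m prints the options in effect for each NFS mount; look for the vers= field (output varies per system):
#nfsstat -m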

#df -k | grep solaris-1

#mount -o bg,intr,ro  solaris-1:/tempnfssun1    /tempnfssun1

Mount command options (a combined example follows this list)
bg = retry the mount in the background if the first attempt fails
intr = allow keyboard interrupts to kill a process hung on a hard mount
ro = mount read-only; no write access regardless of UNIX file permissions
hard = keep retrying requests until the server responds (default)
soft = return an error if the server does not respond
retry=n = number of times to retry the mount (default = 10000)
nosuid = setuid execution not allowed
sec=dh = Secure NFS, using authentication based on Diffie-Hellman public-key encryption
vers=n = NFS version to use (2, 3 or 4)
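
A minimal sketch combining several of these options (hostname and path reused from the examples above, adjust as needed):
#mount -F nfs -o rw,bg,hard,intr,nosuid,vers=3 solaris-1:/tempnfssun1 /tempnfssun1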

Failover mount to multiple servers

#mount -o ro solaris-1:/tempnfssun1,solaris-3:/tempnfssun1   /tempnfssun1
#mount -o ro solaris-2,solaris-1:/tempnfssun1  /tempnfssun1

#umount  /tempnfssun1
#umount  -f  /tempnfssun1
#umountall -r    (unmount all remote filesystems)
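
If umount reports that the device is busy, it is worth checking which processes hold the mount point before forcing it (a sketch using fuser):
#fuser -cu /tempnfssun1     (list PIDs and login names of processes using the mount)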

Thursday, October 24, 2013

Solaris 10 live upgrade from Oracle Solaris 10 9/10 s10x_u9wos_14a x86 to Oracle Solaris 10 1/13 s10x_u11wos_24a x86

1. If you do not intend to register the system, disable auto-registration:

#regadm status
#regadm disable

2. Download and mount the sol-10-u11-ga-x86-dvd.iso

After you insert the DVD:
#mount /dev/dsk/c0t0d0p0  /media

Or mount the ISO file directly with lofiadm:
#lofiadm -a /tmp/sol-10-u11-ga-x86-dvd.iso /dev/lofi/1
#mount -F hsfs -o ro /dev/lofi/1 /mnt
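
If you mounted the ISO under /mnt instead of /media, use /mnt in the steps below. A quick sanity check of the mount, and the cleanup once the upgrade is finished (assuming the lofi device created above):
#ls /mnt/Solaris_10/Tools/Installers
#umount /mnt
#lofiadm -d /dev/lofi/1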

3. Remove the existing Live Upgrade packages and install the latest ones from the DVD image
#pkgrm SUNWlucfg SUNWluu SUNWlur
#cd /media/Solaris_10/Tools/Installers
#./liveupgrade20 -noconsole -nodisplay

4. Check the packages
#pkgchk -v SUNWlucfg SUNWlur SUNWluu

5. The current system uses a SCSI drive with the root partition on /dev/dsk/c0t1d0s0.

6. I am going to add one more SCSI HDD to the Solaris box - /dev/dsk/c1t1d0

Note:
/dev/rdsk/c0d0s0 - IDE Drive (/dev/dsk/c1t1d0s0,s1,s3,s4,s5,s6,s7) s2=overlap,s7=/export/home
/dev/rdsk/c0t0d0 - SCSI Drive
/dev/dsk/c0t0d0p0 - DVD Drive


Check current drive status
# ls /dev/rdsk/*s0

#devfsadm

#drvconfig ( configure the /devices directory )

#disks ( creates /dev entries for hard disks attached to the system )

Check again for new HDD
#ls /dev/rdsk/*s0

To format and partition the HDD
# format
Choose new HDD to format

format >  fdisk
No fdisk table exists. The default partition for the disk is:
a 100% “SOLARIS System” partition
Type “y” to accept the default partition, otherwise type “n” to edit the
partition table.
y

format > partition

partition > print    (note the number of cylinders)

partition > 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 1gb - depends on Cylinder Size

partition > print

partition > label
Ready to label disk, continue? y

partition > quit
format > quit


#newfs /dev/rdsk/c1t1d0s0
newfs: construct a new file system /dev/rdsk/c1t1d0s0: (y/n)? y

#fsck /dev/rdsk/c1t1d0s0

7. Set up the partitions on the new disk to be identical to the current system disk.

#prtvtoc /dev/rdsk/c0t0d0s0 | fmthard -s - /dev/rdsk/c1t1d0s0
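
It is worth confirming the label really copied before building the new BE; the VTOC on the target should now match the source (a quick check):
#prtvtoc /dev/rdsk/c0t0d0s0
#prtvtoc /dev/rdsk/c1t1d0s0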


Optional: mounting the new slice is not required for Live Upgrade; we just want to use /dev/dsk/c1t1d0s0 for the new boot environment.
If you do want to mount it, add the proper line to /etc/vfstab:

/dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 /data ufs 2 yes -

And then mount the partition. In this case, I’m making a /data partition:

# mkdir /data
# mount /data
# df -h /data

8. Create a new BE named solenv2 and name the current BE solenv1; the 'merged' keyword indicates that the file system is merged into its parent.

#lucreate -c solenv1 -m /:/dev/dsk/c1t1d0s0:ufs -m /var:merged:ufs -n solenv2

(In the command above, /var is currently a separate partition and gets merged into /.)

Example with a dedicated swap slice:
#lucreate -c solenv1 -m /:/dev/dsk/c1t1d0s0:ufs -m -:/dev/dsk/c1t1d0s1:swap -n solenv2

Or simply create the root slice only:

#lucreate -c solenv1 -m /:/dev/dsk/c1t1d0s0:ufs -n solenv2

9. Before you run the luupgrade command, you need to disable auto-registration:
#echo "autoreg=disable" > /var/tmp/no-autoreg

10. Run update from the DVD
#luupgrade -u -k /var/tmp/no-autoreg -n solenv2 -s /media/

11. Check the Live Upgrade status
#lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
solenv1                    yes      yes    yes       no     -
solenv2                    yes      no     no        yes    -

12. Check the current boot BE
#lucurr

13. Activate the BE solenv2
#luactivate solenv2

14. Reboot the system with 'init 6'

After the system reboots:

15. cat /etc/release
It should show:
Oracle Solaris 10 1/13 s10x_u11wos_24a X86

16. Check and save the installed patch list with
#showrev -p > showrev.lst

17. Now that solenv2 is active, you can delete the old BE solenv1
#ludelete -F solenv1

Friday, October 11, 2013

ESXi 5.0 to ESXi 5.0 Update 2

1. Download and install ESXi Server
VMware ESXI-5.0.0-469512-STANDARD.x86_64.iso

2. Install the vSphere Client
VMware-viclient-all-5.0.0-455964.exe

3. Go to http://www.vmware.com/patchmgr/download.portal and login with your VMWare account.

4. After you log in with the vSphere Client (VMware-viclient-all-5.0.0-455964.exe), you can start patching with the following update patch files.

5. Upload all the patch files to the datastore via the vSphere Client, e.g. /vmfs/volumes/storage1/ISO

6. Before you apply the new update patches to ESXi 5.0, back up the ESXi 5.0 host configuration.

   #vicfg-cfgbackup.pl --server 192.168.1.1 -s my_backup.bak
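
Should the upgrade go wrong later, the same script can restore this backup; vicfg-cfgbackup's -l option loads a saved configuration (a sketch reusing the host and file above):
   #vicfg-cfgbackup.pl --server 192.168.1.1 -l my_backup.bak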

7. Then put your ESXi 5.0 host into maintenance mode via an SSH console session:

8. #vim-cmd hostsvc/maintenance_mode_enter

9. #esxcli software vib update -d /vmfs/volumes/storage1/ISO/update-from-esxi5.0-5.0_update01.zip

10. Once you've updated to Update 1, you can no longer log in with the previous vSphere Client;
you need to install and log in with VMware-viclient-all-5.0.0-623373.exe - VMware vSphere Client v5.0 Update 1

11. #esxcli software vib update -d /vmfs/volumes/storage1/ISO/update-from-esxi5.0-5.0_update02.zip

12. Once you've updated to Update 2, you can no longer log in with the previous vSphere Client;
you need to install and log in with VMware-viclient-all-5.0.0-913577.exe - VMware vSphere Client v5.0 Update 2

13. esxcli software vib update -d /vmfs/volumes/storage1/ISO/ESXi500-201303001.zip
    esxcli software vib update -d /vmfs/volumes/storage1/ISO/ESXi500-201305001.zip
    esxcli software vib update -d /vmfs/volumes/storage1/ISO/ESXi500-201308001.zip
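
After the last bundle installs, reboot the host and take it out of maintenance mode; a minimal sketch from the same SSH session (the build number reported will depend on the patches applied):
    #esxcli system version get
    #reboot
    (once the host is back up)
    #vim-cmd hostsvc/maintenance_mode_exit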

14. To install the vSphere PowerCLI, you will need Microsoft PowerShell; currently I've downloaded PowerShell 3.0:
    Windows6.1-KB2506143-x86.msu - 32bit
    Windows6.1-KB2506143-x64.msu - 64bit


15. Then download and install
- VMware-PowerCLI-5.0.0-435426.exe - PowerCLI for vSphere 5.0 (Update 1)
- VMware-vSphere-CLI-5.0.0-615831.exe - vSphere CLI for vSphere 5.0 (Update 2)

16. Since vCenter changed to a 64-bit-only installer after 4.x, download and install

VCenter Server 5.0 - VMware-VIMSetup-all-5.0.0-434159.iso - VSphere VCenter Server 5.0

    - Processor: Two 64-bit CPUs, Intel or AMD x64, 2.0GHz or faster
    - Memory: 4GB RAM
      (RAM requirements may be higher if your database runs on the same machine;
      VMware VirtualCenter Management WebServices requires 128MB to 1.5GB of memory, allocated at startup)

    - Disk storage: 5GB (Disk requirements may be higher if your database runs on the same machine)

    - Networking: 1Gbit recommended (team physical NICs for redundancy if possible)

    - Database: SQL Express for small deployments (5 hosts/50 VMs), or see below for supported databases.
      Note: If you will be running SQL Server (Express or Standard/Enterprise) on the same server as vCenter Server, the requirements above will be higher.

    - Operating System:
      Windows Server 2008 (64-bit)
      Windows Server 2008 R2

    - Database:
      Microsoft SQL server Database Support:
      Microsoft SQL Server 2005 Express
      Microsoft SQL Server 2005 Standard edition (SP3) 64 bit
      Microsoft SQL Server 2005 Enterprise edition (SP3) 64 bit
      Microsoft SQL Server 2008 Standard Edition 64 bit
      Microsoft SQL Server 2008 Enterprise Edition 64 bit
(Note: Microsoft SQL Server 2005 Express is intended for use with small deployments of up to 5 hosts and/or 50 virtual machines)

      Oracle Database Support:
      Oracle 10g Standard edition (Release 2 [10.2.0.4])
      Oracle 10g Enterprise edition (Release 2 [10.2.0.4])
      Oracle 10g Enterprise edition (Release 2 [10.2.0.4]) 64 bit
      Oracle 11g Standard edition
      Oracle 11g Enterprise edition

VMware-vCenter-Server-Appliance (a Linux-based appliance, built on SUSE Linux Enterprise Server)
VMware-vCenter-Server-Appliance-5.0.0.3324-472350_OVF10.ovf
VMware-vCenter-Server-Appliance-5.0.0.3324-472350-system.vmdk
VMware-vCenter-Server-Appliance-5.0.0.3324-472350-data.vmdk

Tuesday, October 8, 2013

Patch VMware ESXi 4.0.0 Standalone (VMKernel Release Build 164009) to VMware ESXi 4.1.0 (VMKernel Release Build 800380 - 1198252) - Update 3

1. Go to http://www.vmware.com/patchmgr/download.portal and login with your VMWare account.

Download following files.
- upgrade-from-ESXi4.0-to-4.1.0-0.0.260247-release.zip
- update-from-esxi4.1-4.1_update01.zip
- update-from-esxi4.1-4.1_update02.zip
- update-from-esxi4.1-4.1_update03.zip
- VMware-viclient-all-4.1.0-799345.exe
- VMware-vSphere-CLI-4.1.0-254719.exe
- ESX410-201307001.zip
- ESX410-201304001.zip
- ESX410-201301001.zip
- ESX410-201211001.zip
Assume that your VMWare ESXi 4.0.0 (VMKernel Release Build 164009) is up and running.

2. Download and install VMware-vSphere-CLI-4.1.0-254719.exe, so that you can run VSphere CLI commands.
3. Patch ESXi from 4.0 to 4.1 Update 3
4. Preparation: Login to your VSphere ESXi Server with vSphere Client

5. Shut down all the VMs from the vSphere Client.

6. Put the ESXi host into maintenance mode (right-click the host).

7. Back up the firmware configuration
C:\Program Files\VMware\VMware vSphere CLI\bin>vicfg-cfgbackup.pl --server 192.168.1.1 -s esx-1_20131007.bak
 Enter username: root
 Enter password:
 Saving firmware configuration to esx-1_20131007.bak

8. Update from 4.0 to 4.1
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\vupdate\upgrade-from-ESXi4.0-to-4.1.0-0.0.260247-release.zip
 Enter username: root
 Enter password:
 Please wait patch installation is in progress …
 The update completed successfully, but the system needs to be rebooted for the changes to be effective.

9. Check the patch
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 --query
 Enter username: root
 Enter password:
 ---------Bulletin ID--------- -----Installed----- ----------------Summary-------

ESXi410-GA 2010-10-03T07:10:52 ESXi upgrade Bulletin

ESXi410-GA-esxupdate 2010-10-03T07:10:52 ESXi pre-upgrade Bulletin

10. Use the vSphere Client to reboot the host.

11. Patching ESXi 4.1 to ESXi 4.1 (update 1,2,3)

Put the ESXi host into maintenance mode if it is not already (right-click the host).

12. Update from 4.1 to 4.1 Update 1, 2, 3 and apply the later patch bundles
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\ISO\update-from-esxi4.1-4.1_update01.zip
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\ISO\update-from-esxi4.1-4.1_update02.zip
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\ISO\update-from-esxi4.1-4.1_update03.zip
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\ISO\ESX410-201307001.zip
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\ISO\ESX410-201304001.zip
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\ISO\ESX410-201301001.zip
 C:\Program Files\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server 192.168.1.1 -i -b D:\ISO\ESX410-201211001.zip
 Enter username: root
 Enter password:
 Please wait patch installation is in progress
 The update completed successfully, but the system needs to be rebooted for the changes to be effective.

13. Use the vSphere Client to reboot the host.
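
Once the host is back up, it still has to be taken out of maintenance mode: right-click the host in the vSphere Client, or use the vSphere CLI (a sketch; I believe vicfg-hostops supports an 'exit' operation, but treat the exact syntax as an assumption):
 C:\Program Files\VMware\VMware vSphere CLI\bin>vicfg-hostops.pl --server 192.168.1.1 --operation exit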

14. After you've updated to ESXi 4.1 Update 3, you won't be able to log in with the current vSphere Client; you need to install VMware-viclient-all-4.1.0-799345.exe

vSphere v4.1 Clients

 •VMware vSphere Client v4.1 : VMware-viclient-all-4.1.0-258902.exe
 •VMware vSphere Client v4.1 Update 1 : VMware-viclient-all-4.1.0-345043.exe
 •VMware vSphere Client v4.1 Update 2 : VMware-viclient-all-4.1.0-491557.exe
 •VMware vSphere Client v4.1 Update 3 : VMware-viclient-all-4.1.0-799345.exe