Archive for January, 2011

Setting up COMSTAR iSCSI target on Oracle Solaris 11 Express

Sunday, January 30th, 2011

I found this post on The Grey Blog, which is a good starting point. One thing I noted is that the iSCSI target software does not appear to be installed by default in Oracle’s Solaris 11 Express. The telltale sign is that when you try to issue an itadm command as described below, the command cannot be found. So give it a quick
# pkg install network/iscsi/target

Packages to install:     1
Create boot environment:    No
Services to restart:     1
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1       14/14      0.2/0.2

PHASE                                        ACTIONS
Install Phase                                  48/48

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2
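
As an aside, a quick way to check whether the package is already on the box is the stock IPS pkg command (just a sketch; it simply reports an error if the package isn't installed):

# pkg list network/iscsi/target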

Then start the service:  # svcadm enable -r iscsi/target:default

Then: # svcs \*scsi\*

should give you:

STATE          STIME    FMRI
online         Jan_29   svc:/network/iscsi/initiator:default
online         12:16:26 svc:/network/iscsi/target:default

The post on setting up COMSTAR iSCSI is below.

The Grey Blog: Setting up Solaris COMSTAR and an iSCSI target for a ZFS volume

COMSTAR stands for Common Multiprotocol SCSI Target: it is basically a framework which can turn a Solaris host into a SCSI target. Before COMSTAR made its appearance, there was a very simple way to share a ZFS file system via iSCSI: just setting the shareiscsi property on the file system was sufficient, just as you share one via NFS or CIFS with the sharenfs and sharesmb properties.
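
For reference, that legacy approach amounted to a one-liner roughly like the following (a sketch only; shareiscsi relied on the old iscsitgt daemon and is not the COMSTAR way, and tank/myvolume is a made-up name):

# zfs set shareiscsi=on tank/myvolume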

COMSTAR brings a more flexible and better solution: it’s not as easy as using those ZFS properties, but it is not that hard, either. Should you need a more complex setup, COMSTAR offers a wide set of advanced features such as:

Scalability.
Compatibility with generic host adapters.
Multipathing.
LUN masking and mapping functions.

The official COMSTAR documentation is very detailed and it’s the only source of information about COMSTAR I use. If you want to read more about it, please check it out.
Enabling the COMSTAR service
COMSTAR runs as an SMF-managed service, and enabling it is no different from enabling any other service. First of all, check whether the service is running:

# svcs \*stmf\*
STATE STIME FMRI
disabled 11:12:50 svc:/system/stmf:default

If the service is disabled, enable it:

# svcadm enable svc:/system/stmf:default

After that, check that the service is up and running:

# svcs \*stmf\*
STATE STIME FMRI
online 11:12:50 svc:/system/stmf:default

# stmfadm list-state
Operational Status: online
Config Status : initialized
ALUA Status : disabled
ALUA Node : 0

Creating SCSI Logical Units
You’re not required to master the SCSI protocols to set up COMSTAR, but knowing the basics will help you understand the next steps you’ll go through. Oversimplifying, a SCSI target is the endpoint which waits for client (initiator) connections. For example, a data storage device is a target and your laptop may be an initiator. Each target can provide multiple logical units: each logical unit is the entity that performs “classical” storage operations, such as reading and writing from and to disk.

Each logical unit, then, is backed by some sort of storage device; Solaris and COMSTAR will let you create logical units backed by one of the following storage technologies:

A file.
A thin-provisioned file.
A disk partition.
A ZFS volume.

In this case, we’ll choose the ZFS volume as our favorite backing storage technology.
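
For completeness, the file-backed option from the list above would look roughly like this sketch (/export/luns/lu0 is a hypothetical path, and the size is arbitrary):

# mkfile 10g /export/luns/lu0
# sbdadm create-lu /export/luns/lu0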

Why ZFS volumes?
One of the wonders of ZFS is that it isn’t just another file system: ZFS combines the volume manager and the file system, providing you best-of-breed services from both worlds. With ZFS you can create a pool out of your drives and enjoy services such as mirroring and redundancy. In my case, I’ll be using a RAID-Z pool made up of three eSATA drives for this test:

enrico@solaris:~$ zpool status tank-esata
pool: tank-esata
state: ONLINE
scrub: scrub completed after 1h15m with 0 errors on Sun Feb 14 06:15:16 2010
config:

NAME          STATE     READ WRITE CKSUM
tank-esata    ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c7t0d0    ONLINE       0     0     0
    c8t0d0    ONLINE       0     0     0
    c8t1d0    ONLINE       0     0     0

errors: No known data errors

Inside pools, you can create file systems or volumes, the latter being the equivalent of a raw drive connected to your machine. File systems and volumes use the storage of the pool without any need for further partitioning or slicing. You can create your file systems almost instantly. No more repartition hell or space estimation errors: file systems and volumes will use the space in the pool, according to the optional policies you might have established (such as quotas, space allocation, etc.)
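
As a small example of such policies, capping and guaranteeing space for a file system is a one-liner each (a sketch; tank-esata/backups is a hypothetical file system):

# zfs create tank-esata/backups
# zfs set quota=100G tank-esata/backups
# zfs set reservation=20G tank-esata/backups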

ZFS, moreover, will let you snapshot (and clone) your file systems on the fly, almost instantly: being a copy-on-write file system, ZFS just writes the modifications to disk without any overhead, and when blocks are no longer referenced they’re automatically freed. ZFS snapshots are, in a sense, a much-optimized Solaris counterpart to Apple’s Time Machine.
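
A quick sketch of what that looks like in practice (the file system, snapshot and clone names here are made up):

# zfs snapshot tank-esata/documents@before-upgrade
# zfs clone tank-esata/documents@before-upgrade tank-esata/documents-clone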

Creating a ZFS volume
Creating a volume, provided you already have a ZFS pool, is as easy as:

# zfs create -V 250G tank-esata/macbook0-tm

The previous command creates a 250GB volume called macbook0-tm on pool tank-esata. As expected you will find the raw device corresponding to this new volume:

# ls /dev/zvol/rdsk/tank-esata/
[…snip…] macbook0-tm […snip…]

Creating a logical unit
To create a logical unit for our ZFS volume, we can use the following command:

# sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
Created the following LU:

GUID                              DATA SIZE     SOURCE
--------------------------------  ------------  -------------------------------------
600144f00800271b51c04b7a6dc70001  268435456000  /dev/zvol/rdsk/tank-esata/macbook0-tm

Logical units are identified by a unique ID, which is the GUID shown in sbdadm output. To verify and get a list of the available logical units we can use the following command:

# sbdadm list-lu
Found 1 LU(s)

GUID                              DATA SIZE     SOURCE
--------------------------------  ------------  -------------------------------------
600144f00800271b51c04b7a6dc70001  268435456000  /dev/zvol/rdsk/tank-esata/macbook0-tm

Indeed, it finds the only logical unit we created so far.

Mapping the logical unit
The logical unit we created in the previous section is not available to any initiator yet. To make your logical unit available, you must choose how to map it. Basically, you’ve got two choices:

Mapping it for all initiators on every port.
Mapping it selectively.

In this test, taking into account that it’s a home setup on a private LAN, I’ll go for simple mapping (I sketch the selective alternative right after it). Please choose your mapping strategy carefully according to your needs. If you need more information on selective mapping, check the official COMSTAR documentation.

To get the GUID of the logical unit you can use the sbdadm or the stmfadm commands:

# stmfadm list-lu -v
LU Name: 600144F00800271B51C04B7A6DC70001
Operational Status: Offline
Provider Name : sbd
Alias : /dev/zvol/rdsk/tank-esata/macbook0-tm
View Entry Count : 0
Data File : /dev/zvol/rdsk/tank-esata/macbook0-tm
Meta File : not set
Size : 268435456000
Block Size : 512
Management URL : not set
Vendor ID : SUN
Product ID : COMSTAR
Serial Num : not set
Write Protect : Disabled
Writeback Cache : Enabled
Access State : Active

To create the simple mapping for this logical unit, we run the following command:

# stmfadm add-view 600144f00800271b51c04b7a6dc70001
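
For comparison, selective mapping would look roughly like the following sketch: you create a host group, add your initiators to it, and expose the logical unit only to that group (the group name "laptops" and the initiator node name are made up; see the COMSTAR documentation for the details):

# stmfadm create-hg laptops
# stmfadm add-hg-member -g laptops iqn.1991-05.com.example:my-macbook
# stmfadm add-view -h laptops -n 0 600144f00800271b51c04b7a6dc70001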

Configuring iSCSI target ports
As outlined in the introduction, COMSTAR introduces a new iSCSI transport implementation that replaces the old one. Since the two implementations are incompatible and only one can run at a time, please check which one you’re using. In any case, consider switching to the new implementation as soon as you can.

The old implementation is registered as the SMF service svc:/system/iscsitgt:default and the new implementation is registered as svc:/network/iscsi/target.

enrico@solaris:~$ svcs \*scsi\*
STATE STIME FMRI
disabled Feb_03 svc:/system/iscsitgt:default
online Feb_03 svc:/network/iscsi/initiator:default
online Feb_16 svc:/network/iscsi/target:default

If you’re running the new COMSTAR iSCSI transport implementation, you can now create a target with the following command:

# itadm create-target
Target iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163 successfully created

If you want to check and list the targets you can use the following command:

# itadm list-target
TARGET NAME STATE SESSIONS
iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163 online 0

Configuring the iSCSI target for discovery
The last thing left to do is to configure your iSCSI target for discovery. Discovery is the process an initiator uses to get a list of available targets. You can opt for one of three iSCSI discovery methods:

Static discovery: a static target address is configured.
Dynamic discovery: targets are discovered by initiators using an intermediary iSNS server.
SendTargets discovery: configuring the SendTargets option on the initiator.

I will opt for static discovery because I’ve got a very small number of targets and I want to control which initiators connect to my target. To configure static discovery just run the following command:

# devfsadm -i iscsi
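
For what it's worth, on a Solaris initiator the matching static configuration would look roughly like this sketch (192.168.1.10 stands in for the target host's IP address, and the target name is the one created above; the Mac OS X initiator mentioned below is configured through its own GUI instead):

# iscsiadm add static-config iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163,192.168.1.10:3260
# iscsiadm modify discovery --static enable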

Next steps
Configuring a target is a matter of a few commands. It took me much more time to write down this blog post than to get my COMSTAR target running.

The next step will be having an initiator connect to your target. I detailed how to configure a Mac OS X instance as an iSCSI initiator in another post.

Lifted from the Genunix.org site. Settings for Solaris CIFS shares etc…

Sunday, January 23rd, 2011

I’m only copying this here for now since much of the OpenSolaris documentation I’ve relied on over the years has become unfindable. All the Sun doc links in Google now point to a single Oracle Sun page that seems to get me nowhere… :(

Getting Started With the Solaris CIFS Service – Genunix

How to Join a Workgroup

Start the CIFS Service.

# svcadm enable -r smb/server

If the following warning is issued, you can ignore it:
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Join the workgroup.

# smbadm join -w workgroup-name

The default workgroup name is WORKGROUP. If you want to use the default, skip this step.

Establish passwords for CIFS workgroup users.

CIFS does not support UNIX or NIS style passwords. The SMB PAM module is required to generate CIFS style passwords. When the SMB PAM module is installed, the passwd command generates additional encrypted versions of each password that are suitable for use with CIFS.

Install the PAM module.

Add the following line to the end of the /etc/pam.conf file to support creation of an encrypted version of the user’s password for CIFS.

other password required pam_smb_passwd.so.1 nowarn

Note – After the PAM module is installed, the passwd command automatically generates CIFS-suitable passwords for new users. You must also run the passwd command to generate CIFS-style passwords for existing users.

Only a privileged user can modify the pam.conf file, for example:
# pfexec gedit /etc/pam.conf

Create local user passwords.
# passwd username

(Optional) Verify your Solaris CIFS service configuration.

Download the cifs-chkcfg script.

Run the cifs-chkcfg script.

# cifs-chkcfg

Note – The cifs-chkcfg script does not currently verify the Kerberos configuration.

How to Join an AD Domain
Before You Begin

This task describes how to join an AD domain and pertains to at least SXCE Build 82.

Determine your name mapping strategy and, if appropriate, create Solaris-to-Windows mapping rules. See “Creating Your Identity Mapping Strategy” in the Solaris CIFS Administration Guide.

Creating name-based mapping rules is optional and can be performed at any time. By default, identity mapping uses ephemeral mapping instead of name-based mapping.

Start the CIFS Service.
# svcadm enable -r smb/server

Ensure that system clocks on the domain controller and the Solaris system are synchronized.

For more information, see Step 3 of “How to Configure the Solaris CIFS Service in Domain Mode” in the Solaris CIFS Administration Guide.

Join the domain.

# smbadm join -u domain-user domain-name

You must specify a user that has appropriate access rights to perform this step.

Restart the CIFS Service.
# svcadm restart smb/server

(Optional) Verify your Solaris CIFS service configuration.

Download the cifs-chkcfg script.

Run the cifs-chkcfg script.

# cifs-chkcfg

Note – The cifs-chkcfg script does not currently verify the Kerberos configuration.

How to Create a CIFS Share

Enable SMB sharing for the ZFS file system.

Enable SMB sharing for an existing ZFS file system.

# zfs set sharesmb=on fsname

For example, to enable SMB sharing for the ztank/myfs file system, type:

# zfs set sharesmb=on ztank/myfs

Note – The resource name for the share is automatically constructed by the zfs command when the share is created. The resource name is based on the dataset name, unless you specify a resource name. Any characters that are illegal for resource names are replaced by an underscore character (_).

To specify a resource name for the share, specify a name for the sharesmb property, sharesmb=name=resource-name.

For example, to specify a resource name of myfs for the ztank/myfs file system, type:
# zfs set sharesmb=name=myfs ztank/myfs

Create a new ZFS file system that enables SMB sharing.

When creating a ZFS file system to be used for SMB file sharing, set the casesensitivity option to mixed to permit a combination of case-sensitive and case-insensitive matching. Also, set the nbmand option to enforce mandatory cross-protocol share reservations and byte-range locking.

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on fsname

For example, to create a ZFS file system with SMB sharing and nbmand enabled for the ztank/yourfs file system, type:

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on ztank/yourfs

To specify a resource name for the share, specify a name for the sharesmb property, sharesmb=name=resource-name.

For example, to specify a resource name of yourfs for the ztank/yourfs file system, type:
# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=yourfs ztank/yourfs

Verify how the new file system is shared.

# sharemgr show -vp

Now, you can access the share by connecting to \\solaris-hostname\share-name. For information about how to access CIFS shares from your client, refer to the client documentation.
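
As a quick example, from another Solaris box the share can be mounted with the smbfs client, roughly like this (a sketch; it assumes the smb/client service is available there, and the user, host and share names are placeholders):

# svcadm enable -r smb/client
# mount -F smbfs //username@solaris-hostname/myfs /mnt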

Quick notes on expanding a ZFS RaidZ Pool – Solaris 11 Express. Howto (see bottom for update)

Sunday, January 16th, 2011

So you have what was once a gargantuan ZFS RaidZ1 array, but the family videos, pictures, plus the super cool time windowed (via snapshot) backup method you have created for all your local machines have stuffed up the pool completely. Like me you view just dumping another pair of mirrored drives into the pool to be a hokey kluge that will create dissimilar infrastructure you will have to remember for years (in the event of a failure). Like me you have also heard that you can replace your drives one at a time with larger drives and with the successful replacement of the last drive the array will magically expand in size.

The long/short of my migration:

Whenever you turn your system on ZFS will automatically find your array drives wherever they are and form the array on boot-up. For my migration I bought an external eSata dock (one of the ones where you pop the drive in the top).

For each drive replacement I followed the following procedure.

1. Pop open a shell and become root. (I modded my permissions so pfexec works for me; I show how to do this in another post here on the blog. You can su if you like.)  $ pfexec bash will give you a root shell. Get a status of the pool and make note of the device names.

#zpool status

NAME        STATE     READ WRITE CKSUM
mypool      ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    c9t4d0  ONLINE       0     0     0
    c9t3d0  ONLINE       0     0     0
    c9t2d0  ONLINE       0     0     0
    c9t5d0  ONLINE       0     0     0

2. Shut down the machine.

3. Remove the drive I plan to replace from its current location (bay, SATA cable, power, et al).

4. Place that drive into the eSata dock

5. Put the new larger drive in the place of the old drive.

6. ZFS worked out where the old drive was on boot up.

7. Become root and look at the devices in the system with the format command (note that Ctrl-D will get you out of format). As you can see, one of the devices that was in my zpool before I swapped drives is now one of the new 2TB drives I’m putting into the pool. From running the format command before I put a drive into the eSata dock, I know that any drive in the dock will be c7t513d0, but you could have run before-and-after format commands to look for the changes. Do be careful and make sure you know where your old and new drives are before the next step though…

#format

Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c7t512d0 <ATA    -WDC WD2500AAKS-0953 cyl 30398 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@200,0
1. c7t513d0 <ATA-SAMSUNG HD103UI-0953-931.51GB>
/pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@201,0
2. c9t0d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
/pci@0,0/pci1458,b005@1f,2/disk@0,0
3. c9t1d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
/pci@0,0/pci1458,b005@1f,2/disk@1,0
4. c9t2d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@2,0
5. c9t3d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@3,0
6. c9t4d0 <ATA    -WDC WD20EARS-00-AB51 cyl 60798 alt 2 hd 255 sec 252>
/pci@0,0/pci1458,b005@1f,2/disk@4,0
7. c9t5d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@5,0
Specify disk (enter its number):
^D

8. This was an interesting little annoyance. It seems that the zpool replace command would only work after a zpool status command was run. Running the replace without running the status first gives you the following.

#zpool replace mypool c7t513d0 c9t4d0
cannot replace c7t513d0 with c9t4d0: no such device in pool

So we know we need to run a status first then follow it with the replace command…

#zpool status mypool

pool: mypool
state: ONLINE
scan: scrub canceled on Sat Jan 15 20:56:30 2011
config:

NAME          STATE     READ WRITE CKSUM
mypool        ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c7t513d0  ONLINE       0     0     0
    c9t3d0    ONLINE       0     0     0
    c9t2d0    ONLINE       0     0     0
    c9t5d0    ONLINE       0     0     0

errors: No known data errors

#zpool replace mypool c7t513d0 c9t4d0

9. Run another status so you know what is going on

#zpool status mypool

pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sat Jan 15 21:19:26 2011
64.4M scanned out of 3.07T at 8.05M/s, 111h4m to go
15.5M resilvered, 0.00% done
config:

NAME             STATE     READ WRITE CKSUM
mypool           ONLINE       0     0     0
  raidz1-0       ONLINE       0     0     0
    replacing-0  ONLINE       0     0     0
      c7t513d0   ONLINE       0     0     0
      c9t4d0     ONLINE       0     0     0  (resilvering)
    c9t3d0       ONLINE       0     0     0
    c9t2d0       ONLINE       0     0     0
    c9t5d0       ONLINE       0     0     0

errors: No known data errors

10. When the process is complete, I believe it is advisable to scrub the drives to ensure all is well: #zpool scrub mypool. This will also take a while, and you can check on the status of the scrub with #zpool status mypool.

Notes:

  • When replacing a drive, zpool status will show long estimated times, like the 111 hours shown above. The numbers kept increasing for at least 2 hours and actually made it up to 423 hours remaining, but after 2 to 3 hours data actually started moving and the estimates became much more realistic. This was true for each drive I replaced. I can tell you that completing a 4-drive RaidZ1 array ~85% full took about 12 hours per drive.
  • One crazy note… My server shut down current connections and failed to open the console on the machine during the copy. It started to fail all connection attempts with out-of-memory errors… Not good! Maybe I should not have been running virtual machines while it was resilvering on another pool… Dunno, but it was definitely strange. The resilver succeeded, and the machine did let me in after a couple of hours. I did realize that after installing Oracle Solaris 11 Express I had forgotten to limit the ZFS ARC cache (which I had done before; a good reference is ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache). So before the last drive swap I set the ZFS ARC cache limit to 7 gigs of memory via the following: “set zfs:zfs_arc_max = 0x1C0000000” (a sketch of where that line goes is below).
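
For the curious, that ARC cap goes into /etc/system and takes effect after a reboot; a minimal sketch (0x1C0000000 is 7 GB, and the kstat line is just one way to confirm the new cap in bytes after the reboot):

# gedit /etc/system        (add the line below, save, then reboot)
set zfs:zfs_arc_max = 0x1C0000000
# kstat -p zfs:0:arcstats:c_max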

Warning:

  • Remember that in a RaidZ1 array the loss of 2 drives at one time will lose you the entire array! I know I’m paranoid, but I have lost RAID 5 arrays this way in the past, so imagine the following: you are upgrading a multi-drive RaidZ1 array. If you did not precondition the drives (have them powered up and running drive tests over a few days – most home users do not do this), you will have more than one drive in the array that has been spinning for less than 24 hours. My experience with drive failures is as follows.
    • If a drive does not make it past power on you are OK, you stop migration and get a different drive… no problem.
    • The next hurdle is the drives that fail within 48 hours; this should still be a low percentage, but there will be some.
    • The final, more insidious failures are the drives that go flaky and start losing sectors, then fail. This usually takes a few weeks.

Since most failures happen when drives are relatively new, the odds of having two new drives in an array fail at the same time are far greater than the odds of having two simultaneous failures in a seasoned array. So the average home user will probably get a rack of 4 new huge hard drives on their front porch, run to the server, and start swapping out their array. With all brand new drives in the array, the odds that two will fail in the next week are FAR greater than the odds that you will experience two simultaneous failures after the system has been spinning for a week, and even less after a month.

  • Some strategies to consider as you expand your home ZFS RaidZ1 array:
  • Expand safely. Replace one drive a week with the newer drive, or alternately season drives in another system for a week before you start putting them into your production array.
  • As long as you have not replaced the last larger drive, each drive is still held to the size dictated by your original array. You can avoid having to keep spares of the new larger drive size by keeping your old drives and swapping them back in the event of a failure (until the last drive is replaced and ZFS starts using the full size of the drives).
  • I VERY highly advise weekly scrubbing of the home array. Monitoring ‘zpool status’ after scrubs is the easiest way I know of to identify a flaky drive that is losing sectors. An easy way to do a weekly scrub is by adding a shell script to your crontab as follows:
    • I have the following line for each of my pools in a shell script I call zfsmaint.sh (a sketch of the script is just after this list):
      # zpool scrub <my zfs pool name>
    • You can add this to your crontab with “# crontab -e” and the following entry (replacing <your home dir>, and keeping the zpool command above in a script called zfsmaint.sh in your home dir):
      0 23 * * 1 /export/home/<your home dir>/zfsmaint.sh
      If you are having problems with the vi editor, please go look up vi commands on the web.
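
A minimal sketch of what my zfsmaint.sh boils down to (pool names are whatever yours are, add one scrub line per pool, remember to chmod +x it, and note the crontab entry above assumes root's crontab so the scrub has the privileges it needs):

#!/usr/bin/bash
# zfsmaint.sh - weekly ZFS maintenance: scrub each pool
zpool scrub mypool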

General:

Excellent ZFS Reference: ZFS_Best_Practices_Guide

Future wishes…

When I started my ZFS array there was no RaidZ2 or RaidZ3 (double/triple redundancy), but now there is… Sun never built an upgrade path, and I really hope Oracle will see this as an issue and make one available. At the trivial cost of another disk I would like to move to a 2-drive-redundant array without having to build a whole extra array to move my data through.

UPDATE:

I wanted to make this walkthrough for everyone out there as a compilation of all the individual blogs/guides I had to use to perform the task. After all was said and done, it did not work. Apparently Oracle broke auto-expand in Solaris 11. I went through the steps of setting the pool autoexpand property and trying to force the pool to expand with the new ‘zpool online -e’ command (sketched at the end of this post). Nothing worked. So I ended up copying my data to another pool, creating a new RaidZ2 pool (which I wanted anyway), and copying the data back. This was done via the zfs send/recv function over SSH to another server. After playing around, the command line to do this is as follows:

Create a snapshot in your local machine via

zfs snapshot <mypool>/<filesystem>@<snapshot>

so

# zfs snapshot tank/myshare@today

Then, my destination backup server was at 192.168.1.67 and I created a pool on it called tank2. zfs automatically copied the snapshot and created a myshare filesystem and snapshot in tank2.

zfs send <source_pool>/<source_filesystem>@<snapshot> | ssh <account>@<server_ip> pfexec /sbin/zfs recv <dest_pool>/<dest_filesystem>@<dest_snapshot>

or

# zfs send tank/myshare@today | ssh myaccount@192.168.1.67 pfexec /sbin/zfs recv tank2/myshare@today

After I copied all the filesystems to the backup server, I did a scrub on that server to ensure the drives/data were good. Then I destroyed the pool on the original server, created the new pool (which now fills up the larger drives), and copied everything back. This is run from the console of the backup server; the IP address of the original server is 192.168.1.68:

# zfs send tank2/myshare@today | ssh myaccount@192.168.1.68 pfexec /sbin/zfs recv tank/myshare@today

When it is all moved, scrub the pool and bob’s your uncle… :)
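
For the record, the auto-expand attempt mentioned above boiled down to commands roughly like these (a sketch, reusing the pool/device names from earlier in this post; on my Solaris 11 Express box they did not make the pool grow):

# zpool set autoexpand=on mypool
# zpool online -e mypool c9t4d0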

Upgrading Opensolaris snv_130 to Oracle Solaris Express snv_151a

Sunday, January 2nd, 2011

Modified for Solaris 11

Upgrade to snv_151A from snv_130.

Ok, so my path to the new Oracle Solaris Express was blocked, as I could not get my snv_130 box to upgrade to snv_134, which was required before the final upgrade. I was left having to perform a fresh install and re-import my zpools. This is a quick overview of what I did. I’m not a Solaris guru by any means, and the walkthrough below is a bit spartan, but I thought I would get it out there to see if it would be of help to anyone else who has set up OpenSolaris as a sweet ZFS file/print/virtualization server. Let me know if you have any questions…

1. Export all but the boot zpool on the old machine

#zpool export -f <pool name>

2. Make sure to copy or move current shell scripts from the <user home> dirs. Make sure you copy the current crontab and, to be safe, copy the group and passwd files from the etc dir. If you run VirtualBox you want to be very sure you copy the .VirtualBox directory to a place where you will be able to get to it after the install (a sketch is below).
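
A sketch of the kind of copying I mean (here /somewhere/safe is a made-up destination: a USB stick, another box, anywhere that survives the reinstall):

$ crontab -l > /somewhere/safe/crontab.txt
$ cp /etc/passwd /etc/group /somewhere/safe/
$ cp -rp ~/.VirtualBox /somewhere/safe/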

3. Install Oracle Solaris 11 Express on the new HD

4. To get stuff to work I had to get around the GUI’s expired-root-password bug in Solaris 11 by popping open a CLI and resetting the root password with ‘# passwd root’

5. Edit pam.conf – I’m not sure if this is still needed, but it used to be required under OpenSolaris and it didn’t hurt. :)

#sudo gedit /etc/pam.conf

add the following line to the end of the pam.conf

other password required pam_smb_passwd.so.1 nowarn

6. Fix pfexec, as I used it everywhere… http://blogs.sun.com/observatory/entry/sudo

To do this I had to $ sudo usermod -P “Primary Administrator” <username>

(or you can do it in the gui).

7. Check the status of the CIFS server

# svcs smb/server

8. Turn the CIFS server on

# svcadm enable -r smb/server

I get an error saying that “svc:/milestone/network depends on svc:/network/physical, which has multiple instances.” No worries for now, though: checking the service (step 7) shows it is running.

9. Join the workgroup

#smbadm join -w <workgroup>

10. Add all the original users to the system, add all the same groups as used before, and reset all passwords with the passwd command. (I use ACL access controls, so I needed the same user/group structure. You could also specify the user IDs, as the old pool will come back with user IDs instead of user names, but once you go and touch all the ACLs again it will straighten itself out. A hypothetical example follows.)
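
A hypothetical example of recreating one such user (the name, UID and group are made up; matching the original numeric UID/GID is what keeps ownership and ACLs on the imported pool lining up):

# useradd -u 101 -g staff -d /export/home/alice -m alice
# passwd alice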

11. Import the zpool

#zpool import <pool name>

12. Check the shares

# sharemgr show -vp

Reset the permissions on all the pool drives, give the system a reboot, and you are good to go.

Now on to VirtualBox… Download VirtualBox and install it. Since I keep all my machines on a zpool, all I had to do was copy the .VirtualBox directory from the home dir of the user it was installed under last time. This was done before I reinstalled, as noted above. So after everything above was done, I copied the .VirtualBox directory into my user’s home dir and then installed VirtualBox. The XML files in the directory held the pointers to the machines and hard drive files on the zpool, so everything installed and ran out of the box.