Author: EvilT

Virtualbox Ubuntu Linux guest shares via Guest Additions

Ok, I have Linux VMs running on a Solaris host. One thing I could never find anywhere was instructions on how to mount the shares. It is actually very easy to do.

1. Install the guest additions in the guest.

2. Select the directories you want to share to the guest via the VirtualBox control panel (Settings->Shared Folders).

3. And now the missing piece: on the Linux system, make your user account (and any other account that should be able to get to these shares) part of the vboxsf group.

The shares will appear in the /media folder on the Ubuntu guest, prefixed with “sf_”. So if I share a folder to the guest as Videos, I would find a /media/sf_Videos folder. If you do not add the account in question to the vboxsf group, the folder will appear to be empty.
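If you'd rather not rely on the automatic mounts under /media, a shared folder can also be mounted by hand with the vboxsf filesystem type. A minimal sketch using the Videos share from the example above (the /mnt/videos mount point is my own choice):

```shell
# Mount the VirtualBox shared folder "Videos" manually.
# Requires root and the Guest Additions kernel module.
sudo mkdir -p /mnt/videos
sudo mount -t vboxsf Videos /mnt/videos

# Optionally hand the files to a specific uid/gid instead of root:
# sudo mount -t vboxsf -o uid=1000,gid=1000 Videos /mnt/videos
```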

I first ran into this while setting up a Plex media server in a guest. Plex could not see any files in the folder, but once you add the plex account to the vboxsf entry in the /etc/group file (example below), it works like a champ.

vboxsf:x:1001:plex
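Rather than editing /etc/group by hand, the same membership can be added with usermod; a quick sketch using the plex account from the example above:

```shell
# Append plex to the vboxsf supplementary group (keeping existing groups).
sudo usermod -aG vboxsf plex

# Confirm the membership; note that a re-login (or service restart)
# is needed before already-running processes pick up the new group.
id plex
```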

Solaris Virtualbox host fails install of extension pack

This problem started after I upgraded to the latest Oracle Solaris 11 and tried to install the VirtualBox 4.1.6 update. When I would go to add the new Oracle_VM_VirtualBox_Extension_Pack, I would enter a username and password and the install would fail.

The current workaround is to use VBoxManage from the command line to install the extension pack.

VBoxManage extpack install <extension file>
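For example, with the 4.1.6 pack this would look something like the following (the exact filename is an assumption; substitute whatever you downloaded from Oracle):

```shell
# Install the extension pack from the CLI; run privileged (pfexec/sudo).
pfexec VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.6.vbox-extpack

# Verify it registered.
VBoxManage list extpacks
```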


Setting up COMSTAR iSCSI target on Oracle Solaris 11 Express

I found this post on The Grey Blog, which is a good starting point. One thing I noted is that the iSCSI target service does not appear to be installed by default in Oracle’s Solaris 11 Express. The telltale sign is that when you try to issue an itadm command as described below, the shell cannot find the command. So give it a quick:
# pkg install network/iscsi/target

Packages to install:     1
Create boot environment:    No
Services to restart:     1
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1       14/14      0.2/0.2

PHASE                                        ACTIONS
Install Phase                                  48/48

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

Then start the service:  # svcadm enable -r iscsi/target:default

Then: # svcs \*scsi\*

should give you:

STATE          STIME    FMRI
online         Jan_29   svc:/network/iscsi/initiator:default
online         12:16:26 svc:/network/iscsi/target:default

The post on setting up COMSTAR iSCSI is below.

The Grey Blog: Setting up Solaris COMSTAR and an iSCSI target for a ZFS volume

Setting up Solaris COMSTAR and an iSCSI target for a ZFS volume
COMSTAR stands for Common Multiprotocol SCSI Target: it is basically a framework that can turn a Solaris host into a SCSI target. Before COMSTAR made its appearance, there was a very simple way to share a ZFS file system via iSCSI: setting the shareiscsi property on the file system was sufficient, just as you share via NFS or CIFS with the sharenfs and sharesmb properties.

COMSTAR brings a more flexible and better solution: it’s not as easy as using those ZFS properties, but it is not that hard, either. Should you need a more complex setup, COMSTAR includes a wide set of advanced features such as:

Scalability.
Compatibility with generic host adapters.
Multipathing.
LUN masking and mapping functions.

The official COMSTAR documentation is very detailed and it’s the only source of information about COMSTAR I use. If you want to read more about it, please check it out.
Enabling the COMSTAR service
COMSTAR runs as an SMF-managed service and enabling it is no different than usual. First of all, check whether the service is running:

# svcs \*stmf\*
STATE STIME FMRI
disabled 11:12:50 svc:/system/stmf:default

If the service is disabled, enable it:

# svcadm enable svc:/system/stmf:default

After that, check that the service is up and running:

# svcs \*stmf\*
STATE STIME FMRI
online 11:12:50 svc:/system/stmf:default

# stmfadm list-state
Operational Status: online
Config Status : initialized
ALUA Status : disabled
ALUA Node : 0

Creating SCSI Logical Units
You’re not required to master the SCSI protocols to set up COMSTAR, but knowing the basics will help you understand the steps you’ll go through. Oversimplifying, a SCSI target is the endpoint that waits for client (initiator) connections. For example, a data storage device is a target and your laptop may be an initiator. Each target can provide multiple logical units: each logical unit is the entity that performs “classical” storage operations, such as reading and writing from and to disk.

Each logical unit, then, is backed by some sort of storage device; Solaris and COMSTAR will let you create logical units backed by one of the following storage technologies:

A file.
A thin-provisioned file.
A disk partition.
A ZFS volume.

In this case, we’ll choose the ZFS volume as our favorite backing storage technology.
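Just to illustrate one of the alternatives, a file-backed logical unit can be sketched in two commands (the path and size here are made up for illustration; the ZFS-volume route used below is generally preferable):

```shell
# Create a 1 GB backing file, then register it as a logical unit.
mkfile 1g /export/luns/lu0
sbdadm create-lu /export/luns/lu0
```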

Why ZFS volumes?
One of the wonders of ZFS is that it isn’t just another file system: ZFS combines the volume manager and the file system, providing you best-of-breed services from both worlds. With ZFS you can create a pool out of your drives and enjoy services such as mirroring and redundancy. In my case, I’ll be using a RAID-Z pool made up of three eSATA drives for this test:

enrico@solaris:~$ zpool status tank-esata
pool: tank-esata
state: ONLINE
scrub: scrub completed after 1h15m with 0 errors on Sun Feb 14 06:15:16 2010
config:

NAME            STATE     READ WRITE CKSUM
tank-esata      ONLINE       0     0     0
  raidz1-0      ONLINE       0     0     0
    c7t0d0      ONLINE       0     0     0
    c8t0d0      ONLINE       0     0     0
    c8t1d0      ONLINE       0     0     0

errors: No known data errors

Inside pools, you can create file systems or volumes, the latter being the equivalent of a raw drive connected to your machine. File systems and volumes use the storage of the pool without any need for further partitioning or slicing. You can create your file systems almost instantly. No more repartition hell or space estimation errors: file systems and volumes will use the space in the pool, according to the optional policies you might have established (such as quotas, space allocation, etc.)

ZFS, moreover, will let you snapshot (and clone) your file systems on the fly, almost instantly: being a copy-on-write file system, ZFS just writes modifications to disk without any overhead, and when blocks are no longer referenced they are automatically freed. ZFS snapshots are, in a sense, a much-optimized Solaris take on Apple’s Time Machine.

Creating a ZFS volume
Creating a volume, provided you already have a ZFS pool, is as easy as:

# zfs create -V 250G tank-esata/macbook0-tm

The previous command creates a 250GB volume called macbook0-tm on pool tank-esata. As expected you will find the raw device corresponding to this new volume:

# ls /dev/zvol/rdsk/tank-esata/
[…snip…] macbook0-tm […snip…]
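As an aside, if you don’t want to reserve the full 250 GB up front, zfs create accepts -s for a sparse (thin-provisioned) volume; a sketch:

```shell
# Thin-provisioned variant: space is allocated on demand, so the pool
# can be oversubscribed -- keep an eye on free space.
zfs create -s -V 250G tank-esata/macbook0-tm
```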

Creating a logical unit
To create a logical unit for our ZFS volume, we can use the following command:

# sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
Created the following LU:

GUID                              DATA SIZE     SOURCE
--------------------------------  ------------  -------------------------------------
600144f00800271b51c04b7a6dc70001  268435456000  /dev/zvol/rdsk/tank-esata/macbook0-tm

Logical units are identified by a unique ID, which is the GUID shown in sbdadm output. To verify and get a list of the available logical units we can use the following command:

# sbdadm list-lu
Found 1 LU(s)

GUID                              DATA SIZE     SOURCE
--------------------------------  ------------  -------------------------------------
600144f00800271b51c04b7a6dc70001  268435456000  /dev/zvol/rdsk/tank-esata/macbook0-tm

Indeed, it finds the only logical unit we created so far.

Mapping the logical unit
The logical unit we created in the previous section is not available to any initiator yet. To make your logical unit available, you must choose how to map it. Basically, you’ve got two choices:

Mapping it for all initiators on every port.
Mapping it selectively.

In this test, taking into account that it’s a home setup on a private LAN, I’ll go for simple mapping. Please choose your mapping strategy carefully according to your needs. If you need more information on selective mapping, check the official COMSTAR documentation.

To get the GUID of the logical unit you can use the sbdadm or the stmfadm commands:

# stmfadm list-lu -v
LU Name: 600144F00800271B51C04B7A6DC70001
Operational Status: Offline
Provider Name : sbd
Alias : /dev/zvol/rdsk/tank-esata/macbook0-tm
View Entry Count : 0
Data File : /dev/zvol/rdsk/tank-esata/macbook0-tm
Meta File : not set
Size : 268435456000
Block Size : 512
Management URL : not set
Vendor ID : SUN
Product ID : COMSTAR
Serial Num : not set
Write Protect : Disabled
Writeback Cache : Enabled
Access State : Active

To create the simple mapping for this logical unit, we run the following command:

# stmfadm add-view 600144f00800271b51c04b7a6dc70001
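For contrast, a selective mapping would expose the logical unit only to a named host group; a rough sketch with stmfadm (the host group name and the initiator IQN are invented for illustration):

```shell
# Create a host group, add an initiator to it, then restrict the view
# of the LU to members of that group.
stmfadm create-hg laptops
stmfadm add-hg-member -g laptops iqn.1993-08.org.example:01:abcdef123456
stmfadm add-view -h laptops 600144f00800271b51c04b7a6dc70001
```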

Configuring iSCSI target ports
As outlined in the introduction, COMSTAR introduces a new iSCSI transport implementation that replaces the old one. Since the two implementations are incompatible and only one can run at a time, please check which one you’re using. Nevertheless, consider switching to the new implementation as soon as you can.

The old implementation is registered as the SMF service svc:/system/iscsitgt:default and the new implementation is registered as svc:/network/iscsi/target.

enrico@solaris:~$ svcs \*scsi\*
STATE STIME FMRI
disabled Feb_03 svc:/system/iscsitgt:default
online Feb_03 svc:/network/iscsi/initiator:default
online Feb_16 svc:/network/iscsi/target:default

If you’re running the new COMSTAR iSCSI transport implementation, you can now create a target with the following command:

# itadm create-target
Target iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163 successfully created

If you want to check and list the targets you can use the following command:

# itadm list-target
TARGET NAME STATE SESSIONS
iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163 online 0

Configuring the iSCSI target for discovery
The last thing left to do is to configure your iSCSI target for discovery. Discovery is the process an initiator uses to get a list of available targets. You can opt for one of three iSCSI discovery methods:

Static discovery: a static target address is configured on the initiator.
Dynamic discovery: targets are discovered by initiators using an intermediary iSNS server.
SendTargets discovery: the SendTargets option is configured on the initiator.

I will opt for static discovery because I’ve got a very small number of targets and I want to control which initiators connect to my target. To configure static discovery just run the following command:

# devfsadm -i iscsi
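On a Solaris initiator, the static-discovery side of this is set up with iscsiadm before devfsadm builds the device nodes; a sketch (the target IQN is the one created above, while the target’s IP address is a placeholder):

```shell
# On the initiator: enable static discovery and register the target.
iscsiadm modify discovery --static enable
iscsiadm add static-config \
    iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163,192.168.1.10:3260

# Create device nodes for the discovered LUs.
devfsadm -i iscsi
```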

Next steps
Configuring a target is a matter of a few commands. It took me much more time to write down this blog post than to get my COMSTAR target running.

The next step will be having an initiator connect to your target. I detailed how to configure a Mac OS X instance as an iSCSI initiator in another post.

Lifted from the Genunix.org site. Settings for Solaris CIFS shares etc…

I’m only copying this here for now since much of the OpenSolaris documentation I’ve relied on over the years has become unfindable. All the Sun doc links in Google now point to a single Oracle Sun page that seems to get me nowhere… :(

Getting Started With the Solaris CIFS Service – Genunix

How to Join a Workgroup

Start the CIFS Service.

# svcadm enable -r smb/server

If the following warning is issued, you can ignore it:
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Join the workgroup.

# smbadm join -w workgroup-name

The default workgroup name is WORKGROUP. If you want to use the default, skip this step.

Establish passwords for CIFS workgroup users.

CIFS does not support UNIX or NIS style passwords. The SMB PAM module is required to generate CIFS style passwords. When the SMB PAM module is installed, the passwd command generates additional encrypted versions of each password that are suitable for use with CIFS.

Install the PAM module.

Add the following line to the end of the /etc/pam.conf file to support creation of an encrypted version of the user’s password for CIFS.

other password required pam_smb_passwd.so.1 nowarn

Note – After the PAM module is installed, the passwd command automatically generates CIFS-suitable passwords for new users. You must also run the passwd command to generate CIFS-style passwords for existing users.

Only a privileged user can modify the pam.conf file, for example:
# pfexec gedit /etc/pam.conf

Create local user passwords.
# passwd username

(Optional) Verify your Solaris CIFS service configuration.

Download the cifs-chkcfg script.

Run the cifs-chkcfg script.

# cifs-chkcfg

Note – The cifs-chkcfg script does not currently verify the Kerberos configuration.

How to Join an AD Domain
Before You Begin

This task describes how to join an AD domain and pertains to at least SXCE Build 82.

Determine your name mapping strategy and, if appropriate, create Solaris-to-Windows mapping rules. See “Creating Your Identity Mapping Strategy” in the Solaris CIFS Administration Guide.

Creating name-based mapping rules is optional and can be performed at any time. By default, identity mapping uses ephemeral mapping instead of name-based mapping.

Start the CIFS Service.
# svcadm enable -r smb/server

Ensure that system clocks on the domain controller and the Solaris system are synchronized.

For more information, see Step 3 of “How to Configure the Solaris CIFS Service in Domain Mode” in the Solaris CIFS Administration Guide.

Join the domain.

# smbadm join -u domain-user domain-name

You must specify a user that has appropriate access rights to perform this step.

Restart the CIFS Service.
# svcadm restart smb/server

(Optional) Verify your Solaris CIFS service configuration.

Download the cifs-chkcfg script.

Run the cifs-chkcfg script.

# cifs-chkcfg

Note – The cifs-chkcfg script does not currently verify the Kerberos configuration.

How to Create a CIFS Share

Enable SMB sharing for the ZFS file system.

Enable SMB sharing for an existing ZFS file system.

# zfs set sharesmb=on fsname

For example, to enable SMB sharing for the ztank/myfs file system, type:

# zfs set sharesmb=on ztank/myfs

Note – The resource name for the share is automatically constructed by the zfs command when the share is created. The resource name is based on the dataset name, unless you specify a resource name. Any characters that are illegal for resource names are replaced by an underscore character (_).

To specify a resource name for the share, specify a name for the sharesmb property, sharesmb=name=resource-name.

For example, to specify a resource name of myfs for the ztank/myfs file system, type:
# zfs set sharesmb=name=myfs ztank/myfs

Create a new ZFS file system that enables SMB sharing.

When creating a ZFS file system to be used for SMB file sharing, set the casesensitivity option to mixed to permit a combination of case-sensitive and case-insensitive matching. Also, set the nbmand option to enforce mandatory cross-protocol share reservations and byte-range locking.

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on fsname

For example, to create a ZFS file system with SMB sharing and nbmand enabled for the ztank/yourfs file system, type:

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on ztank/yourfs

To specify a resource name for the share, specify a name for the sharesmb property, sharesmb=name=resource-name.

For example, to specify a resource name of yourfs for the ztank/yourfs file system, type:
# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=yourfs ztank/yourfs

Verify how the new file system is shared.

# sharemgr show -vp

Now, you can access the share by connecting to \\solaris-hostname\share-name. For information about how to access CIFS shares from your client, refer to the client documentation.
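A quick way to sanity-check the new share from a Linux or UNIX client is smbclient; a sketch (hostname, share name, and user are placeholders):

```shell
# Browse the share interactively; prompts for the CIFS password.
smbclient //solaris-hostname/myfs -U username

# Or mount it on a Linux client (needs cifs-utils and root):
# mount -t cifs //solaris-hostname/myfs /mnt/myfs -o user=username
```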

Quick notes on expanding a ZFS RaidZ Pool – Solaris 11 Express. Howto (see bottom for update)

So you have what was once a gargantuan ZFS RaidZ1 array, but the family videos, pictures, plus the super cool time windowed (via snapshot) backup method you have created for all your local machines have stuffed up the pool completely. Like me you view just dumping another pair of mirrored drives into the pool to be a hokey kluge that will create dissimilar infrastructure you will have to remember for years (in the event of a failure). Like me you have also heard that you can replace your drives one at a time with larger drives and with the successful replacement of the last drive the array will magically expand in size.

The long/short of my migration:

Whenever you turn your system on ZFS will automatically find your array drives wherever they are and form the array on boot-up. For my migration I bought an external eSata dock (one of the ones where you pop the drive in the top).

For each drive replacement I followed this procedure.

1. Pop open a shell and become root. (I modded my permissions so pfexec works for me; I show how to do this in another post here on the blog. You can su if you like.) $ pfexec bash will give you a root shell. Get a status of the pool and make note of the device names.

#zpool status

NAME        STATE     READ WRITE CKSUM
mypool      ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    c9t4d0  ONLINE       0     0     0
    c9t3d0  ONLINE       0     0     0
    c9t2d0  ONLINE       0     0     0
    c9t5d0  ONLINE       0     0     0

2. Shut down the machine.

3. Remove the drive I plan to replace from its current location (bay, SATA cable, power, et al.).

4. Place that drive into the eSata dock

5. Put the new larger drive in the place of the old drive.

6. ZFS worked out where the old drive was on boot up.

7. Become root and look at the devices in the system with the format command (note that Ctrl-D will get you out of the format command). As you can see, one of the devices that was in my zpool before I swapped drives is now one of the new 2 TB drives I’m putting into the pool. From running the format command before I put a drive into the eSata dock, I know that any drive in the dock will be c7t513d0, but you could also run before-and-after format commands and look for the changes. Do be careful and make sure you know where your old and new drives are before the next step though…

#format

Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c7t512d0 <ATA -WDC WD2500AAKS-0953 cyl 30398 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@200,0
1. c7t513d0 <ATA-SAMSUNG HD103UI-0953-931.51GB>
/pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@201,0
2. c9t0d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
/pci@0,0/pci1458,b005@1f,2/disk@0,0
3. c9t1d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
/pci@0,0/pci1458,b005@1f,2/disk@1,0
4. c9t2d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@2,0
5. c9t3d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@3,0
6. c9t4d0 <ATA    -WDC WD20EARS-00-AB51 cyl 60798 alt 2 hd 255 sec 252>
/pci@0,0/pci1458,b005@1f,2/disk@4,0
7. c9t5d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@5,0
Specify disk (enter its number):
^D

8. This was an interesting little annoyance: it seems that the zpool replace command would only work after a zpool status command was run. Running the replace without running a status first gives you the following.

#zpool replace mypool c7t513d0 c9t4d0
cannot replace c7t513d0 with c9t4d0: no such device in pool

So we know we need to run a status first then follow it with the replace command…

#zpool status mypool

pool: mypool
state: ONLINE
scan: scrub canceled on Sat Jan 15 20:56:30 2011
config:

NAME          STATE     READ WRITE CKSUM
mypool        ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c7t513d0  ONLINE       0     0     0
    c9t3d0    ONLINE       0     0     0
    c9t2d0    ONLINE       0     0     0
    c9t5d0    ONLINE       0     0     0

errors: No known data errors

#zpool replace mypool c7t513d0 c9t4d0

9. Run another status so you know what is going on

#zpool status mypool

pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sat Jan 15 21:19:26 2011
64.4M scanned out of 3.07T at 8.05M/s, 111h4m to go
15.5M resilvered, 0.00% done
config:

NAME             STATE     READ WRITE CKSUM
mypool           ONLINE       0     0     0
  raidz1-0       ONLINE       0     0     0
    replacing-0  ONLINE       0     0     0
      c7t513d0   ONLINE       0     0     0
      c9t4d0     ONLINE       0     0     0  (resilvering)
    c9t3d0       ONLINE       0     0     0
    c9t2d0       ONLINE       0     0     0
    c9t5d0       ONLINE       0     0     0

errors: No known data errors

10. When the process is complete, I believe it is advisable to scrub the pool to ensure all is well: #zpool scrub mypool. This will also take a while, and you can check on the status of the scrub with #zpool status mypool.

Notes:

  • When replacing a drive, zpool status will show long estimated times, like the 111 hours shown above. The numbers kept increasing for at least 2 hours and actually made it up to 423 hours remaining, but after 2 to 3 hours data actually started moving and the estimates became much more realistic. This was true for each drive I replaced. I can tell you that completing a 4-drive RaidZ1 array ~85% full took about 12 hours per drive.
  • One crazy note… My server shut down current connections and failed to open the console on the machine during the copy. It started to fail all connection attempts with out-of-memory errors… Not good! Maybe I should not have been running virtual machines while it was resilvering on another pool… Dunno, but it was definitely strange. The resilver succeeded, and the machine did let me in after a couple of hours. I did realize that after installing Oracle Solaris 11 Express I had forgotten to limit the ZFS ARC cache (which I had done before; a good reference is ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache). So before the last drive swap I set the ZFS ARC cache limit to 7 GB of memory via the following: “set zfs:zfs_arc_max = 0x1C0000000”.

Warning:

  • Remember that in a RaidZ1 array, the loss of 2 drives at one time will lose you the entire array! I know I’m paranoid, but I have lost RAID 5 arrays this way in the past, so imagine the following: you are upgrading a multi-drive RaidZ1 array. If you did not precondition the drives (have them powered up and under drive testing for a few days; most home users do not do this), you will have more than one drive in the array that has been spinning for less than 24 hours. My experience with drive failures is as follows.
    • If a drive does not make it past power-on you are OK; you stop the migration and get a different drive… no problem.
    • The next hurdle is the drives that fail within 48 hours; this should still be a low percentage, but there will be some.
    • The final, more insidious case is the drives that go flaky and start losing sectors, then fail. This usually takes a few weeks.

Since most of the failures come when drives are relatively new, the odds of having two new drives in an array fail at the same time are far greater than the odds of two simultaneous failures in a seasoned array. So the average home user will probably get a rack of 4 huge new hard drives on their front porch, run to the server, and start swapping out their array. With all brand-new drives in the array, the odds that two will fail in the next week are FAR greater than after the system has been spinning for a week, and lower still after a month.

  • Some strategies to consider as you expand your home ZFS RaidZ1 array:
  • Expand safely. Replace one drive a week with the newer drive, or alternately season drives in another system for a week before you start putting them into your production array.
  • As long as you have not replaced the last larger drive, each drive is still limited to the size dictated by your original array. You can avoid having to keep spares of the new larger drive size by keeping your old drives and swapping them back in the event of a failure (until the last drive is replaced and ZFS starts using the full size of the drives).
  • I VERY highly advise weekly scrubbing of the home array. Monitoring ‘zpool status’ after scrubs is the easiest way I know of to identify a flaky drive that is losing sectors. An easy way to do a weekly scrub is to add a shell script to your crontab as follows:
    • I have a line of the form “zpool scrub <my zfs pool name>” for each of my pools in a shell script I call zfsmaint.sh.
    • You can add the script to your crontab by running “crontab -e”
      and adding the following line (of course replacing <your home dir>; the zpool commands go in a script called zfsmaint.sh in your home dir):
      0 23 * * 1 /export/home/<your home dir>/zfsmaint.sh
      If you are having problems with the vi editor, go look up vi commands on the web.
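Putting those pieces together, the zfsmaint.sh script might look like this (the pool names are placeholders for your own):

```shell
#!/bin/sh
# zfsmaint.sh - weekly scrub of each listed pool.
# 'zpool scrub' returns immediately; the scrub runs in the background,
# and progress shows up later in 'zpool status <pool>'.
for pool in mypool tank-esata; do
    zpool scrub "$pool"
done
```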

General:

Excellent ZFS Reference: ZFS_Best_Practices_Guide

Future wishes…

When I started my ZFS array there was no RaidZ2 or RaidZ3 (double/triple redundancy), but now there is… Sun never built an upgrade path; I really hope Oracle will see this as an issue and make one available. At the trivial cost of another disk, I would like to move to a 2-drive-redundant array without having to build a whole extra array to move the data through.

UPDATE:

I wanted to make this walkthrough for everyone out there as a compilation of all the individual blogs/guides I had to use to perform the task. After all was said and done, it did not work. Apparently Oracle broke auto-expand in Solaris 11. I went through the steps of setting the pool’s autoexpand property and trying to force the pool to expand with the new ‘zpool online -e’ command. Nothing worked. So I ended up copying my data to another pool, creating a new RaidZ2 pool (which I wanted anyway), and copying the data back. This was done with the zfs send/recv function via SSH to another server. After playing around, the command line to do this is as follows:

Create a snapshot in your local machine via

zfs snapshot <mypool>/<filesystem>@<snapshot>

so

# zfs snapshot tank/myshare@today

My destination backup server was at 192.168.1.67, and I created a pool on it called tank2. zfs recv automatically created the myshare filesystem and snapshot in tank2.

zfs send <source_pool>/<source_filesystem>@<snapshot> | ssh <account>@<server_ip> pfexec /sbin/zfs recv <dest_pool>/<dest_filesystem>@<dest_snapshot>

or

# zfs send tank/myshare@today | ssh myaccount@192.168.1.67 pfexec /sbin/zfs recv tank2/myshare@today
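Once the first full copy exists, later syncs can send just the changes; an incremental send between two snapshots is sketched below (the @later snapshot name is invented, everything else follows the example above):

```shell
# Take a newer snapshot, then send only the delta since @today.
zfs snapshot tank/myshare@later
zfs send -i tank/myshare@today tank/myshare@later | \
    ssh myaccount@192.168.1.67 pfexec /sbin/zfs recv tank2/myshare
```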

After I copied all the filesystems to the new server, I did a scrub on the new server to ensure the drives/data were good. Then I destroyed the pool on the original server, created the new pool (which now filled up the drives), and copied everything back. This is run from the console on the new server, where the IP address of the old server is 192.168.1.68:

# zfs send tank2/myshare@today | ssh myaccount@192.168.1.68 pfexec /sbin/zfs recv tank/myshare@today

When it is all moved, scrub the pool and Bob’s your uncle… :)

Upgrading Opensolaris snv_130 to Oracle Solaris Express snv_151a

Modified for Solaris 11

Upgrade to snv_151A from snv_130.

Ok, so my path to upgrade to the new Oracle Solaris Express was blocked, as I could not get my snv_130 to upgrade to snv_134 so I could perform the upgrade. I was left having to perform a fresh install and re-import of my zpools. This is a quick overview of what I did. I’m not a Solaris guru by any means, and the walkthrough below is a bit spartan, but I thought I would get it out there to see if it would be of help to anyone else who had set up OpenSolaris as a sweet ZFS file/print/virtualization server. Let me know if you have any questions…

1. Export all but boot Zpools on old machine

#zpool export -f <pool name>

2. Make sure to copy or move current shell scripts from <user home> dirs. Copy the current crontab and, to be safe, the group and passwd files from /etc. If you run VirtualBox, be very sure you copy the .VirtualBox directory to a place where you will be able to get to it after the install.

3. Install Solaris Express on the new HD.

4. To get things to work, I had to get around the GUI expired-root-password bug in Solaris 11 by popping open a CLI and resetting the root password with ‘# passwd root’.

5. Edit pam.conf. I’m not sure if this is still needed, but it used to be required under OpenSolaris and it didn’t hurt. :)

#sudo gedit /etc/pam.conf

add the following line to the end of the pam.conf

other password required pam_smb_passwd.so.1 nowarn

6. Fix pfexec as I used it everywhere… http://blogs.sun.com/observatory/entry/sudo

To do this I had to $ sudo usermod -P “Primary Administrator” <username>

(or you can do it in the gui).

7. Check the status of the CIFS server

# svcs smb/server

8. Turn the CIFS server on

# svcadm enable -r smb/server

I get an error saying that “svc:/milestone/network depends on svc:/network/physical, which has multiple instances.” No worries for now though; checking the service as in the previous step says it is running.

9. Join the workgroup

#smbadm join -w <workgroup>

10. Add all the original users to the system, add all the same groups as used before, and reset all passwords with the passwd command. (I use ACL access controls, so I needed the same user/group structure. You could also specify the user IDs, as the old pool will come back with user IDs instead of user names, but after you go and touch all the ACLs again it will straighten itself out.)

11. Import the zpool

#zpool import <pool name>

12. Check the shares

# sharemgr show -vp

Reset the permissions on all the pool drives, give the system a reboot, and you are good to go.

Now on to VirtualBox… Download VirtualBox and install it. Since I keep all my machines on a zpool, all I had to do was copy the .VirtualBox directory from the home dir of the user it was installed under last time. This was done before I reinstalled, as noted above. So after everything above was done, I copied the .VirtualBox directory into my user’s home dir, then installed VirtualBox. The xml files in the directory held the pointers to the machines and hard drive files on the zpool, so everything installed and ran out of the box.

    Al Gore – Ethanol, Politics, Pandering, and Other Inconvenient Truths

    Ed Wallace has an excellent article describing the current political climate for the increase in the percentage of federally subsidized ethanol in gasoline… via BusinessWeek.com

    My favorite excerpt:

    But ethanol’s newest public-relations problem actually started in the
    last eight days of November. Having been fervidly pro-ethanol in the
    last decade of his political career, former Vice-President Al Gore
    reversed course and apologized for supporting ethanol. Of course, Gore’s reason for taking his original position was perfectly understandable, to a politician. As he told energy conference attendees in Athens, Greece, “One of the reasons I made that mistake is that I paid particular attention to the farmers in my home state of Tennessee, and I had a certain fondness for the farmers of Iowa because I was about to run for President.”

    Duplicate Contacts and the Droid X

    Ok, I use MarkSpace’s Missing Sync for Android on my Snow Leopard Mac to synchronize everything to my Verizon Droid X Android phone. About 3 months ago it developed a duplicate for most every contact I have. The duplicates are not on my Mac and they are not showing from another sync source.

    I found this on the net and it seems to work pretty well:

    Go to “Manage Applications”, select the Contacts Storage and Contacts apps, clear the data for both, then reboot the Droid. The Droid should come back up with only the default Verizon contacts (e.g., #BAL, #MIN, Voice Mail, etc.). Then resync the phone with Missing Sync for Android (MarkSpace; it’s in the Market) and Bob’s your uncle… only one set of contacts.

    Be sure your Mac’s contact list is up to date before you do this because it will wipe out everything on your phone.

    Oil filter cross reference for Generac 70185

    For the Guardian air-cooled 7 kW, 12 kW, and 15 kW generators (models 04758, 04759, and 04760 respectively), all models call for a Generac 70185 oil filter. Since I wanted to change my oil and was not sure where to get the Generac filters, I went looking for common crosses. I’m still looking for a cross for the OC 8127 air filter.

    Here is a handy list of oil filters that cross to the Generac 70185.

    Purolator L14476
    Fram PH4967
    ACDelco PF1233
    WIX 51394
    Amsoil EAO09
    STP S4967
    K&N HP-1003
    Napa Gold 1394 or 21394
    Pennzoil PZ39
    Valuecraft V4967
    Mopar FE308

    It appears to be the same filter used in the 2001 Corolla and Echo.

    American Home Shield – What a rip off!

    I just realized that I did not include my only service call in the American Home Shield post regarding the Pool Warranty exclusions.

    To set the frame, this was last summer. The temperature was above 100 degrees every day. My air conditioner would not cool the house. It would finally get the house down to 80 degrees around 3 in the morning (when it was in the 70’s outside). The air conditioner itself was only 2 years old, so definitely not a candidate for replacement.

    AHS sent a technician to my home; it took them 4 days to do so. The service company they sent never called and never set up an appointment. When I called the service company’s office to find out when someone was coming, I was told they would be there that same day and that I needed to be at the house to work with them or they would cancel my call. I don’t know about everyone else, but for me it is difficult to drop everything at the office in the middle of the morning to run home and meet a technician, but I did so anyway.

    So the technician shows up. He never looks at the air handler and never measures inlet/outlet temperature; he just goes to the condenser and checks gas pressure. He does not have a temperature probe to tell him the temperature of the coolant lines (and according to my information, you have to know that specific temperature to know what the gas pressure should be). Then, based on the pressure check, he announced that the AC unit was “doing all it could do” and repeatedly cited that it was “awful hot out”. Then he handed me the bill for his work, which amounted to $60. Not being a complete idiot, I had already measured the temperature drop, wet bulb temperature, and dry bulb temperature in my house before I ever called American Home Shield, and I knew the unit was not “doing all it could do”. When I gave the service representative the details of my analysis, he was completely unimpressed and started alternating between the mantras of “doing all it could do” and “awful hot out”. He also took the time to inform me that if I called AHS for another technician, he would come back out, but the answer and the cost would be the same. So he went on his way.

    I can tell you that I have spent around 15 years of my life in some form or another of electro-mechanical service industry. I know technicians, I know troubleshooting, and I can generally figure out when the drivers for a technician are not lined up with the best interest of the consumer. It was not difficult to come to the conclusion that the guy was just hitting as many homes as he could and getting out the door.

    So I called the company whose sticker was on the side of the air conditioner; since they were the original installers of the system, they agreed to come out that day. They arrived less than 2 hours after the AHS technician left. They looked the system over, found that the unit was 2 lb low on Puron (the trade name for the newer refrigerant that replaced Freon), and found an air leak in the intake of the air handler that was sucking hot air from the attic into the system. Total cost was $210.

    With my air working now I called AHS to see if they would reimburse me for the cost of a technician who actually planned to fix the issue when they arrived and reimburse me for the cost of the technician who had no interest in fixing the issue. They said that they would not reimburse either. If I wanted to continue to pursue the issue I could fax my bill to a number they supplied, but they could not give me the number or name of anyone who I could talk to about it before or after I faxed the document. They also spent the time to assure me that the bill would not be paid and that my $60 service fee would not be refunded. I informed them that my warranty was almost up for renewal and that I would not be renewing. I also let them know that instead of taking my time to participate in their futile claim process, I would wait to join the next class action lawsuit against them.

    After their outright lying about the pool coverage when I signed up for the service and having to pay for technicians who have no interest in actually fixing the issue, I told AHS to stuff their warranty. If I follow up on this rant I think it will be a cost benefit analysis of paying for a home warranty in the first place.

    Make a .iso from a CD or DVD on Snow Leopard

    You can determine which device is your CD/DVD drive using the following command:

    drutil status

    Vendor Product Rev
    MATSHITA DVD-R UJ-825 DAM5

    Type: CD-ROM Name: /dev/disk1
    Cur Write: 16x CD Sessions: 1
    Max Write: 16x CD Tracks: 3
    Overwritable: 00:00:00 blocks: 0 / 0.00MB / 0.00MiB
    Space Free: 00:00:00 blocks: 0 / 0.00MB / 0.00MiB
    Space Used: 66:55:27 blocks: 301152 / 616.76MB / 588.19MiB
    Writability:

    Now you will need to unmount the disk with the following command:

    diskutil unmountDisk disk1

    Now you can write the ISO file with the dd utility:

    dd if=/dev/disk1 of=file.iso

    When finished you will want to remount the disk:

    diskutil mountDisk disk1
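    The three steps above can be wrapped into one small function. This is a sketch under the assumption that you pass the raw device and output file yourself; `diskutil` exists only on OS X, so the unmount/remount steps are skipped when it is absent:

```shell
# make_iso DEVICE OUTPUT
# Unmounts the disc, images it with dd, then remounts it.
make_iso() {
    dev="$1"; out="$2"
    # diskutil is OS X-only; skip the mount handling elsewhere
    if command -v diskutil >/dev/null 2>&1; then
        diskutil unmountDisk "$dev"
    fi
    # bs=2048 matches the 2048-byte sector size of CDs and DVDs
    dd if="$dev" of="$out" bs=2048
    if command -v diskutil >/dev/null 2>&1; then
        diskutil mountDisk "$dev"
    fi
}

# Usage: make_iso /dev/disk1 file.iso
```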

    Droid-X DLNA streaming: make sure you optimize for streaming in Handbrake…

    In handbrakecli you need to use the --optimize flag

    At least it’s the only way I’ve had any luck.

    Try using Handbrake to make a streaming version with the following command arguments (remember if you have spaces in your file name or path you will have to put the input file and output file in quotes):

    handbrakecli --input <input file> --output <output file> --format MP4 --markers --x264opts level=30:bframes=0:cabac=0:ref=1:vbv-maxrate=768:vbv-bufsize=2000:analyse=all:me=umh:no-fast-pskip=1:subq=6:8x8dct=0:trellis=0:weightb=0:mixed-refs=0:ref=1:subme=2 --vb 1536 --two-pass --optimize --keep-display-aspect --turbo --audio 1 --aencoder faac --ab 160 --mixdown dpl2 --arate 48 --drc 2.0 --native-language eng --subtitle-forced scan --subtitle scan

    If you are having trouble setting up a DLNA server, it is really easy to pop up Mezzmo, and it streams files encoded with the above command line without fail here.

    Mezzmo DLNA server…

    Just as a FYI… Of all the DLNA servers I have played with in the effort to get the Droid-X to stream transcoded media my favorite so far is the Mezzmo DLNA server from Conceiva. The support people are “Johnny on the spot” with bug fixes so far, they actually have a profile set up for the Droid-X, and the features are coming fast and quick. It runs on Windows so transcoding is slower, but the menus etc are better than the rest and so far it has been less finicky than the others I’ve tried.

    DLNA, Droid-X, and wasted time…

    Ok here are my experiences so far getting a DLNA server to transcode to the DroidX:
    I have tried nearly every UPnP/DLNA server out there on virtually every platform (Mac, Linux, Windows).

    Observations so far:

    I’ve put up about 12 different DLNA server/platform combinations, so far there isn’t one that does everything I want. Mezzmo is the closest, but it still doesn’t have a few features I really crave.

    Windows is the slowest at transcoding everything. So my current recommendation is to run in Linux or Mac (Linux is quickest so far)

    The Droid-X processor only supports MP4 natively (maybe WMV or some other format that I will not use, but MP4 as far as I’m concerned). That is the only format that will spool from a DLNA server to the phone.

    The problem is that the phone requires MP4 files to have specific data at the beginning of the file (look up MP4 atoms for more detail). It seems that the information for some of the required atoms cannot be calculated until the file has been converted to MP4 (it is also possible that FFmpeg could write these up front but simply does not, and almost everyone uses FFmpeg for transcoding…). Most of the media servers will stream an MP4 file without transcoding, and most of them will transcode MP4 files into whatever format your other devices like. So it appears your best convergence option is to re-encode your video as MP4 (and take the quality hit) so the phone can view it natively, then let your media server transcode for all the other systems (Xbox/TV/DVR/PS3/etc.). Note that if you are using Handbrake to convert, I have to enable the streaming optimization to get the phone to play the file.
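    A crude way to check whether a finished file is streaming-ready is to compare the byte offsets of the first “moov” atom (the index) and the first “mdat” atom (the media data); for streaming, moov has to come first. This sketch just greps for the tags rather than walking the atom tree properly, which is usually good enough for files straight out of an encoder:

```shell
# moov_before_mdat FILE
# Succeeds (exit 0) if the moov atom appears before mdat,
# i.e. the MP4 looks streaming-optimized.
moov_before_mdat() {
    moov=$(grep -abo moov "$1" | head -n1 | cut -d: -f1)
    mdat=$(grep -abo mdat "$1" | head -n1 | cut -d: -f1)
    [ -n "$moov" ] && [ -n "$mdat" ] && [ "$moov" -lt "$mdat" ]
}
```

    If the check fails, re-encoding with Handbrake’s streaming optimization (or remuxing with a tool such as qtfaststart) moves the moov atom to the front without another full transcode.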

    Other than that, I’m off to my next level of convergence… The phone captures in .3gp. I have one media server on a Linux VM (Twonky, I believe) that supports DLNA uploads, so I can seamlessly upload video to the server. Unfortunately, I haven’t found a DLNA server that will transcode the .3gp files… It would be really nice if I could simply DLNA-copy videos and pics from my phone to the DLNA server and then have them instantly available via transcode on the other DLNA clients around the house…

    Anybody else have a DLNA that supports uploads or .3gp transcode?