Lifehacker provides some instruction on how to get a little privacy on the web…

December 29th, 2012 by EvilT

Everyone’s Trying to Track What You Do on the Web: Here’s How to Stop Them

I could not get the link to Firefox to work, and Do Not Track Plus has changed its name to DoNotTrackMe, but the three primary add-ons (Adblock Plus, Ghostery, and DoNotTrackMe) are the same for Chrome and Firefox…

For Firefox, I added them all through the Firefox add-ons listing, then subscribed to Antisocial (after you have loaded the add-ons above, clicking the subscribe link on the Antisocial page will automatically add the list to your browser).

Cheap prepaid cell phone service – Tired of the big 4? Use their networks via other prepaid carriers…

November 3rd, 2012 by EvilT

So I’ve been researching ways to keep unlimited data in the new cellular world. Here are links to some articles on the subject.

Android Central weighs in on a couple of providers…
http://www.androidcentral.com/prepaid-not-just-burner-phones-anymore

And PC Magazine’s article on the 10 best cheap prepaid phone plans you’ve never heard of.
http://www.pcmag.com/article2/0,2817,2375644,00.asp

Using HDAT2 to fix 1TB drives used in a ZFS pool. Drives show only 32MB available.

August 4th, 2012 by EvilT

Samsung HD103UI 1TB drive only reporting a 32MB size. Walkthrough of the fix using HDAT2.

Ok, so I’ve had a few Samsung Spinpoint F1 drives that have been dead because they report as being only 32MB in size. This appears to be due either to an incompatibility with some Intel-chipset Gigabyte motherboards (from some reference info), or to the fact that I had them formatted as part of a zpool under Solaris ZFS. I’m not sure which, but whatever the cause, I could not use them for anything larger than a good-sized USB stick.

Well, that’s all fixed now.

None of the Seagate utilities helped (Seagate bought Samsung’s drive business a while back, so you cannot find Samsung utilities anymore); SeaTools does not reset the max size no matter what you do. After searching the net over and over, I finally found a forum discussion with a link to a utility that fixed the issue. The program is called HDAT2, and it is available at hdat2.com.

Apparently the issue is that, for some reason, the max address for the current user area is set to 65134 LBA sectors, which at 512 bytes per sector works out to roughly 33MB (65134 × 512 ≈ 33.3 million bytes). The current native area, however, is 1953525168 sectors, which is one terabyte (1953525168 × 512 ≈ 1.0 × 10^12 bytes). Below you will find a pictorial howto for fixing this issue with HDAT2.

First, download HDAT2 and burn it to a CD, then boot to the CD with the hard drive you want to fix plugged into the machine. The ISO for HDAT2 is self-booting, so no worries… 🙂

On the first screen you can see my drive with a capacity of 33.35MB.

Now go and select your drive

In the next menu select “SET MAX (HPA) Menu”

Then select “Set Max Address”

Now scroll down to the “New User” line and press S to set the max address to that value.

After confirming with ‘Y’ that you want to do this, you should see the change confirmed.

Now the home screen shows a full 1TB of Storage Goodness!
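
Side note (my addition, not part of the original walkthrough): if you have a Linux box handy, the hdparm utility can reportedly perform the same HPA reset without booting a CD. A sketch, assuming the affected drive is /dev/sdb (check your device name carefully first; some hdparm builds also demand the --yes-i-know-what-i-am-doing flag for permanent changes):

# Show current vs. native max sectors; a mismatch reveals the HPA
hdparm -N /dev/sdb

# Permanently restore the user area to the native 1953525168 sectors
hdparm -N p1953525168 --yes-i-know-what-i-am-doing /dev/sdb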

Oracle Solaris 11 Express and Link Aggregation.

July 4th, 2012 by EvilT

Note: if you are not finding the following link aggregation commands, don’t worry, they have just changed in Solaris 11 Express:
     ipadm create-if  <has become>  ipadm create-ip
     ipadm delete-if  <has become>  ipadm delete-ip

First things first: become root.
Next you have to disable NWAM (Network Auto Magic):
# netadm enable -p ncp DefaultFixed   (you can re-enable it later via: # netadm enable -p ncp Automatic)

I actually unplug the ethernet cables

 Find your physical ethernet devices:
# dladm show-phys

Make sure you don’t have any links (the aggregation itself is built with dladm create-aggr; see the sketch below):
# dladm show-link
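
The aggregation step itself isn’t spelled out in my notes above, so here is a minimal sketch of it. Assumptions (mine, not from the original notes): the physical NICs are net0 and net1, the aggregation is named aggr0, and it gets a static address:

# Remove any IP interfaces that exist on the physical links
ipadm delete-ip net0
ipadm delete-ip net1

# Build the aggregation from the two physical links
dladm create-aggr -l net0 -l net1 aggr0

# Put an IP interface and a static address on the new aggregation
ipadm create-ip aggr0
ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4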

You will have to fix DNS. Smart things work after making the changes below, but Firefox and other things that rely on basic DNS resolution still don’t work, because DNS is only partially working. I’ve found links saying I need to edit nsswitch.conf; however, Oracle has made it a system-updated file, so you need to change the settings via svccfg. If you cat /etc/nsswitch.conf and do not see hosts: and ipnodes: listed with files and dns, then perform the following…
# svccfg
svc:/network/dns/client:default> select name-service/switch
svc:/system/name-service/switch> setprop config/host = astring: "files dns"
svc:/system/name-service/switch> setprop config/ipnode = astring: "files dns"
svc:/system/name-service/switch> select system/name-service/switch:default
svc:/system/name-service/switch:default> refresh
svc:/system/name-service/switch:default> validate
svc:/system/name-service/switch:default>
# svcadm enable dns/client

# ipadm show-addr   (to see your actual address)

If you cannot ping out, do a ‘# netstat -r’ and look for a default route. If you do not see a default route, then:
# route -p add default 192.168.1.1   (put your real gateway IP address here)
Now a ‘# netstat -r’ should show a default route. See if you can ping now.

If you can ping, verify name resolution via ‘# dig slashdot.org’

I found some good DNS info on this site, and here is the Oracle Solaris 11 site that explains DNS configuration.
Here is a site with a simple DNS verification procedure.

______________________________________________________________________________________
Problems with Oracle Solaris 11 Aggregate Link and VirtualBox
Some links:

  • http://docs.oracle.com/cd/E19082-01/819-6990/ggixp/index.html 
  • https://forums.virtualbox.org/viewtopic.php?f=11&t=47453&sid=20c9fc84fda5ea96fbc13451fd881ddd

After building the link aggregation, VirtualBox will not be able to use the aggregated network adapter, so you need to build a virtual NIC with dladm. After the adapter is built, go into each virtual machine and set it to use the new virtual adapter.
    # dladm create-vnic -l data-link vnic-name
    If the name of your aggregated NIC is agg0 and you want to build a virtual NIC called vnic0 then you would type the following
    # dladm create-vnic -l agg0 vnic0

Patent Reform (Software Style)

June 21st, 2012 by EvilT

The good people at the EFF are heading up a lobbying effort to reform the patent system. I personally believe that one of the big impediments to actually developing and building a product in this country is the legal overhead required to get anything done. A large part of this overhead is patent research and patent litigation.

If you are of a similar mind, just like the EFF, or have other reasons to believe that the patent system is desperately in need of an enema, please stop by and sign their petition.

Logitech Control Center for the Mac – OSX

May 19th, 2012 by EvilT

OK, so for some reason I keep forgetting this: the Mac OSX Logitech Control Center screws up everything on the Mac. I had just installed it again last weekend, and my hot corners stopped working, sub-menus required me to hold down the left click to stay active, and the only way the screen saver would start was from the inactivity timeout.

With a hot corner (set for screen saver) or with the screen saver test in System Preferences, the screen saver will start for a second or two, then stop and return you to the main screen. I played around with it for a week or so, then remembered that the Logitech Control Center had caused this exact same issue under Leopard and Snow Leopard. Well, it seems it’s still not fixed. I tried the latest 3.51 this time, and it still screwed up the screen saver. If you are having a similar issue, go to your Applications/Utilities folder and uninstall LCC (Logitech Control Center).

RootKeeper, Droid X2, Root, and the 1.3.418 update

May 12th, 2012 by EvilT

If you have a Droid X2 that may be rooted and your cell provider is trying to push an update on you, look in the Market for an app called Voodoo OTA RootKeeper.

First, tell it to ‘backup su’ (a checkbox will then appear next to the line that says “Protected su copy available”). Then perform a temp un-root and allow the update to install. When Android says the update was successful, launch Voodoo OTA RootKeeper again and tell it to restore/re-root the phone.

Easy as pie…

Ahhhhhh… The smell of fresh science. Pielke introduces a new concept: Linking current events to climate change and the Bullshit button.

March 29th, 2012 by EvilT

Quoted from the Pielke article:

The full IPCC Special Report on Extremes is out today, and I have just gone through the sections in Chapter 4 that deal with disasters and climate change. Kudos to the IPCC — they have gotten the issue just about right, where “right” means that the report accurately reflects the academic literature on this topic. Over time good science will win out over the rest — sometimes it just takes a little while.

A few quotable quotes from the report (from Chapter 4):

  • “There is medium evidence and high agreement that long-term trends
    in normalized losses have not been attributed to natural or
    anthropogenic climate change”
  • “The statement about the absence of trends in impacts attributable
    to natural or anthropogenic climate change holds for tropical and
    extratropical storms and tornados”
  • “The absence of an attributable climate change signal in losses also holds for flood losses”

This quote, however fun it is, does not give you enough info to play with…

Please read the entire article here

Then when done have fun with the rest of his blog… 😉

Vizio XVT553SV, the Onkyo series of receivers, and the elusive Audio Return Channel

January 20th, 2012 by EvilT

It seems to be rather difficult to find details on how to hook up this Vizio TV to any receiver. I uncovered this little gem this morning, and I think it will solve my issues: the only port on the Vizio XVT553SV that supports Audio Return Channel over HDMI-CEC is port 1. Make sure your receiver is plugged into port one…

Will update after I get it all tested…

OSX Lion, Solaris 11 Express, CIFS share errors, NFS share errors, and the AFP napp-it solution

January 8th, 2012 by EvilT

Ok, after all the problems and wrangling trying to get CIFS shares to work reliably with OSX Lion (problems writing to CIFS shares, mainly error 50, etc…), I’ve installed napp-it on the server along with netatalk (there is a default netatalk install in the napp-it howto on the site). I turned off all the SMB/CIFS, iSCSI, and NFS shares (except what I need for my Winderz friends) and moved all my shares to AFP.

Since Lion will no longer let you use USB drives connected to an AirPort Extreme, I had to come up with something (I had wrestled with trying to set up Time Machine on iSCSI, but issues with compatibility from old iSCSI initiators and the cost of the new ones drove me away). So far everything works perfectly, even Time Machine.

I may post howtos, but since napp-it is reasonably easy to use (a little clunky, but everything is there), I will wait until I see some questions here before I go to the trouble.

Representative Mike Kelly from PA

January 1st, 2012 by EvilT

I don’t know the man, but I like the sentiment.

<rant>

I’m beginning to think that a major issue with congress is a lack of specific goals and accountability. Where else could you have a job that requires you to finish a task at a specific time, but allows you to extend that time, alter the requirements for the task, or just ignore the task because people won’t like the answer? It would not be an effective way to run any organization.

I am sick of having a government that refuses to be financially responsible (or even to run the country) because they are too busy with partisan political posturing for the media. Many of our elected representatives are currently allowed to play at brinksmanship in order to ensure continuing press coverage (free advertising complete with rhetoric) for their parties. This becomes problematic when it is granted a higher priority than actually getting something done. So if there isn’t time to both waggle your finger in the press and produce legislation, since there is little accountability, Congress gets the option to pick which priority to service. Lately the balance is pretty slanted to finger waggling…

However, I do enjoy the fact that, if they cannot agree on legislation, they cannot spend more of my money than they currently do…

</rant>

Virtualbox Ubuntu Linux guest shares via Guest Additions

December 21st, 2011 by EvilT

Ok, I have Linux VMs running on a Solaris host. One thing I can never find anywhere is instructions on how to mount the shares. It is actually very easy to do.

1. Install the guest additions in the guest.

2. Select the directories you want to share to the guest via the VirtualBox control panel (Settings->Shared Folders).

3. And now the missing piece: on the Linux system, add your user account (and any other account that should be able to get to these shares) to the vboxsf group. A quick sketch is below.
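
A minimal sketch of step 3, assuming a stock Ubuntu guest (group membership takes effect at the next login):

# Add the current user to the vboxsf group so the auto-mounted shares are readable
sudo usermod -aG vboxsf $USER

# After logging back in, a folder shared as "Videos" shows up here:
ls /media/sf_Videos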

The shares will appear in the /media folder on my Ubuntu guest, prefixed with “sf_”. So if I share the folder to the guest as Videos, I would find a /media/sf_Videos folder. If you do not add the account in question to the vboxsf group, then you will see no files in the folder.

I first ran into this setting up a Plex media server in a guest. Plex could not see any files in the folder, but when you add the plex account to the /etc/group file (example below), it works like a champ.

vboxsf:x:1001:plex

Solaris Virtualbox host fails install of extension pack

December 21st, 2011 by EvilT

This problem started after I upgraded to the latest Oracle Solaris 11 and tried to install the VirtualBox 4.1.6 update. When I went to add the new Oracle_VM_VirtualBox_Extension_Pack, I would enter a username and password and the install would fail.

The current workaround is to use VBoxManage from the command line to install the extension pack.

VBoxManage extpack install <extension file>
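
For example, assuming the pack was downloaded to the current directory (the exact filename varies by version; this one is illustrative):

# Install the extension pack from the command line instead of the GUI
VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.6.vbox-extpack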


Watch setting manual for the Citizen E812 movement.

December 21st, 2011 by EvilT

I’m putting it here so I can quit looking for it every time I need it. 😉

Citizen E812 manual

John Deere Oil Filter Cross Reference – D130 42″ cut

September 17th, 2011 by EvilT

Use 10W30 Oil

OEM Filter GY20577
John Deere Filter AM125424
Briggs and Stratton 492932
Purolator L10241
Fram PH8170
Napa Gold 1056
WIX 51056
Kohler 12 050 01-S

How to see the recently discovered Supernova in your own backyard…

September 6th, 2011 by EvilT

How to See the Recently Discovered Supernova

One of my favorite quotes of all time.

February 3rd, 2011 by EvilT

You can’t always write a chord ugly enough to say what you want to say, so sometimes you have to rely on a giraffe filled with whipped cream.

Frank Zappa

Setting up COMSTAR iSCSI target on Oracle Solaris 11 Express

January 30th, 2011 by EvilT

I found this post on The Grey Blog, which is a good starting point. One thing I noted is that the iSCSI target service does not appear to be installed by default in Oracle’s Solaris 11 Express. The telltale sign is that when you try to issue an itadm command as described below, it cannot find the command. So give it a quick:
# pkg install network/iscsi/target

Packages to install:     1
Create boot environment:    No
Services to restart:     1
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1       14/14      0.2/0.2

PHASE                                        ACTIONS
Install Phase                                  48/48

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

Then start the service:  # svcadm enable -r iscsi/target:default

Then: # svcs \*scsi\*

should give you:

STATE          STIME    FMRI
online         Jan_29   svc:/network/iscsi/initiator:default
online         12:16:26 svc:/network/iscsi/target:default

The post on setting up COMSTAR iSCSI is below.

The Grey Blog: Setting up Solaris COMSTAR and an iSCSI target for a ZFS volume

Setting up Solaris COMSTAR and an iSCSI target for a ZFS volume
COMSTAR stands for Common Multiprotocol SCSI Target: it basically is a framework which can turn a Solaris host into a SCSI target. Before COMSTAR made its appearance, there was a very simple way to share a ZFS file system via iSCSI: just setting the shareiscsi property on the file system was sufficient, just as you share it via NFS or CIFS with the sharenfs and sharesmb properties.
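
For reference, the legacy approach really was a single property; a quick sketch, assuming an existing ZFS volume tank/myvol (the name is made up):

# Pre-COMSTAR way to expose a ZFS volume as an iSCSI target
zfs set shareiscsi=on tank/myvol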

COMSTAR brings a more flexible and better solution: it’s not as easy as using those ZFS properties, but it is not that hard, either. Should you need a more complex setup, COMSTAR includes a wide set of advanced features such as:

Scalability.
Compatibility with generic host adapters.
Multipathing.
LUN masking and mapping functions.

The official COMSTAR documentation is very detailed and it’s the only source of information about COMSTAR I use. If you want to read more about it, please check it out.
Enabling the COMSTAR service
COMSTAR runs as an SMF-managed service, and enabling it is no different than usual. First of all, check if the service is running:

# svcs \*stmf\*
STATE          STIME    FMRI
disabled       11:12:50 svc:/system/stmf:default

If the service is disabled, enable it:

# svcadm enable svc:/system/stmf:default

After that, check that the service is up and running:

# svcs \*stmf\*
STATE          STIME    FMRI
online         11:12:50 svc:/system/stmf:default

# stmfadm list-state
Operational Status: online
Config Status     : initialized
ALUA Status       : disabled
ALUA Node         : 0

Creating SCSI Logical Units
You’re not required to master the SCSI protocols to set up COMSTAR, but knowing the basics will help you understand the next steps you’ll go through. Oversimplifying, a SCSI target is the endpoint which waits for client (initiator) connections. For example, a data storage device is a target and your laptop may be an initiator. Each target can provide multiple logical units: each logical unit is the entity that performs “classical” storage operations, such as reading and writing from and to disk.

Each logical unit, then, is backed by some sort of storage device; Solaris and COMSTAR will let you create logical units backed by one of the following storage technologies:

A file.
A thin-provisioned file.
A disk partition.
A ZFS volume.

In this case, we’ll choose the ZFS volume as our favorite backing storage technology.

Why ZFS volumes?
One of the wonders of ZFS is that it isn’t just another file system: ZFS combines the volume manager and the file system, providing you best-of-breed services from both worlds. With ZFS you can create a pool out of your drives and enjoy services such as mirroring and redundancy. In my case, I’ll be using a RAID-Z pool made up of three eSATA drives for this test:

enrico@solaris:~$ zpool status tank-esata
  pool: tank-esata
 state: ONLINE
 scrub: scrub completed after 1h15m with 0 errors on Sun Feb 14 06:15:16 2010
config:

        NAME          STATE     READ WRITE CKSUM
        tank-esata    ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            c7t0d0    ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0
            c8t1d0    ONLINE       0     0     0

errors: No known data errors

Inside pools, you can create file systems or volumes, the latter being the equivalent of a raw drive connected to your machine. File systems and volumes use the storage of the pool without any need for further partitioning or slicing. You can create your file systems almost instantly. No more repartition hell or space estimation errors: file systems and volumes will use the space in the pool, according to the optional policies you might have established (such as quotas, space allocation, etc.)

ZFS, moreover, will let you snapshot (and clone) your file systems on the fly almost instantly: being a copy-on-write file system, ZFS will just write the modifications to disk, without any overhead, and when blocks are no longer referenced, they’ll be automatically freed. ZFS snapshots are, on Solaris, a much-optimized version of Apple’s Time Machine.
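
As a quick illustration of how cheap these operations are, a sketch assuming a file system on the pool above (the file system and snapshot names are made up):

# Snapshots are instant and initially consume no extra space
zfs snapshot tank-esata/home@before-upgrade

# A clone is a writable file system branched off the snapshot
zfs clone tank-esata/home@before-upgrade tank-esata/home-test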

Creating a ZFS volume
Creating a volume, provided you already have a ZFS pool, is as easy as:

# zfs create -V 250G tank-esata/macbook0-tm

The previous command creates a 250GB volume called macbook0-tm on the pool tank-esata. As expected, you will find the raw device corresponding to this new volume:

# ls /dev/zvol/rdsk/tank-esata/
[…snip…] macbook0-tm […snip…]

Creating a logical unit
To create a logical unit for our ZFS volume, we can use the following command:

# sbdadm create-lu /dev/zvol/rdsk/tank-esata/macbook0-tm
Created the following LU:

GUID                             DATA SIZE     SOURCE
-------------------------------- ------------- --------------------------------------
600144f00800271b51c04b7a6dc70001 268435456000  /dev/zvol/rdsk/tank-esata/macbook0-tm

Logical units are identified by a unique ID, which is the GUID shown in the sbdadm output. To verify and get a list of the available logical units, we can use the following command:

# sbdadm list-lu
Found 1 LU(s)

GUID                             DATA SIZE     SOURCE
-------------------------------- ------------- --------------------------------------
600144f00800271b51c04b7a6dc70001 268435456000  /dev/zvol/rdsk/tank-esata/macbook0-tm

Indeed, it finds the only logical unit we created so far.

Mapping the logical unit
The logical unit we created in the previous section is not available to any initiator yet. To make your logical unit available, you must choose how to map it. Basically, you’ve got two choices:

Mapping it for all initiators on every port.
Mapping it selectively.

In this test, taking into account that it’s a home setup on a private LAN, I’ll go for simple mapping. Please choose your mapping strategy carefully, according to your needs. If you need more information on selective mapping, check the official COMSTAR documentation.

To get the GUID of the logical unit you can use the sbdadm or the stmfadm commands:

# stmfadm list-lu -v
LU Name: 600144F00800271B51C04B7A6DC70001
    Operational Status: Offline
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/tank-esata/macbook0-tm
    View Entry Count  : 0
    Data File         : /dev/zvol/rdsk/tank-esata/macbook0-tm
    Meta File         : not set
    Size              : 268435456000
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active

To create the simple mapping for this logical unit, we run the following command:

# stmfadm add-view 600144f00800271b51c04b7a6dc70001

Configuring iSCSI target ports
As outlined in the introduction, COMSTAR introduces a new iSCSI transport implementation that replaces the old one. Since the two implementations are incompatible and only one can run at a time, please check which one you’re using. Nevertheless, consider switching to the new implementation as soon as you can.

The old implementation is registered as the SMF service svc:/system/iscsitgt:default and the new implementation is registered as svc:/network/iscsi/target.

enrico@solaris:~$ svcs \*scsi\*
STATE          STIME    FMRI
disabled       Feb_03   svc:/system/iscsitgt:default
online         Feb_03   svc:/network/iscsi/initiator:default
online         Feb_16   svc:/network/iscsi/target:default

If you’re running the new COMSTAR iSCSI transport implementation, you can now create a target with the following command:

# itadm create-target
Target iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163 successfully created

If you want to check and list the targets you can use the following command:

# itadm list-target
TARGET NAME                                                  STATE    SESSIONS
iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163  online   0

Configuring the iSCSI target for discovery
The last thing left to do is to configure your iSCSI target for discovery. Discovery is the process an initiator uses to get a list of available targets. You can opt for one of the three iSCSI discovery methods:

Static discovery: a static target address is configured.
Dynamic discovery: targets are discovered by initiators using an intermediary iSNS server.
SendTargets discovery: configuring the SendTargets option on the initiator.

I will opt for static discovery because I’ve got a very small number of targets and I want to control which initiators connect to my target. To configure static discovery just run the following command:

# devfsadm -i iscsi

Next steps
Configuring a target is a matter of a few commands. It took me much more time to write down this blog post than it took to get my COMSTAR target running.

The next steps will be having an initiator connect to your target. I detailed how to configure a Mac OS X instance as an iSCSI initiator in another post.
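
As an aside (my addition, not part of the original post): the Solaris initiator side of static discovery is only a few commands as well. A sketch, assuming the target created above lives on a host at 192.168.1.10 (the address is illustrative):

# Point the initiator at the target statically, enable static discovery, and build the device nodes
iscsiadm add static-config iqn.1986-03.com.sun:02:7674e54f-6738-4c55-d57d-87a165eda163,192.168.1.10:3260
iscsiadm modify discovery --static enable
devfsadm -i iscsi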

Lifted from the Genunix.org site. Settings for Solaris CIFS shares, etc…

January 23rd, 2011 by EvilT

I’m only copying this here for now, since much of the OpenSolaris documentation I’ve relied on over the years has become unfindable. All the Sun doc links in Google now point to a single Oracle Sun page that seems to get me nowhere… 🙁

Getting Started With the Solaris CIFS Service – Genunix

How to Join a Workgroup

Start the CIFS Service.

# svcadm enable -r smb/server

If the following warning is issued, you can ignore it:
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances

Join the workgroup.

# smbadm join -w workgroup-name

The default workgroup name is WORKGROUP. If you want to use the default, skip this step.

Establish passwords for CIFS workgroup users.

CIFS does not support UNIX or NIS style passwords. The SMB PAM module is required to generate CIFS style passwords. When the SMB PAM module is installed, the passwd command generates additional encrypted versions of each password that are suitable for use with CIFS.

Install the PAM module.

Add the following line to the end of the /etc/pam.conf file to support creation of an encrypted version of the user’s password for CIFS.

other password required pam_smb_passwd.so.1 nowarn

Note – After the PAM module is installed, the passwd command automatically generates CIFS-suitable passwords for new users. You must also run the passwd command to generate CIFS-style passwords for existing users.

Only a privileged user can modify the pam.conf file, for example:
# pfexec gedit /etc/pam.conf

Create local user passwords.
# passwd username

(Optional) Verify your Solaris CIFS service configuration.

Download the cifs-chkcfg script.

Run the cifs-chkcfg script.

# cifs-chkcfg

Note – The cifs-chkcfg script does not currently verify the Kerberos configuration.

How to Join an AD Domain
Before You Begin

This task describes how to join an AD domain and pertains to at least SXCE Build 82.

Determine your name mapping strategy and, if appropriate, create Solaris-to-Windows mapping rules. See “Creating Your Identity Mapping Strategy” in the Solaris CIFS Administration Guide.

Creating name-based mapping rules is optional and can be performed at any time. By default, identity mapping uses ephemeral mapping instead of name-based mapping.

Start the CIFS Service.
# svcadm enable -r smb/server

Ensure that system clocks on the domain controller and the Solaris system are synchronized.

For more information, see Step 3 of “How to Configure the Solaris CIFS Service in Domain Mode” in the Solaris CIFS Administration Guide.

Join the domain.

# smbadm join -u domain-user domain-name

You must specify a user that has appropriate access rights to perform this step.

Restart the CIFS Service.
# svcadm restart smb/server

(Optional) Verify your Solaris CIFS service configuration.

Download the cifs-chkcfg script.

Run the cifs-chkcfg script.

# cifs-chkcfg

Note – The cifs-chkcfg script does not currently verify the Kerberos configuration.

How to Create a CIFS Share

Enable SMB sharing for the ZFS file system.

Enable SMB sharing for an existing ZFS file system.

# zfs set sharesmb=on fsname

For example, to enable SMB sharing for the ztank/myfs file system, type:

# zfs set sharesmb=on ztank/myfs

Note – The resource name for the share is automatically constructed by the zfs command when the share is created. The resource name is based on the dataset name, unless you specify a resource name. Any characters that are illegal for resource names are replaced by an underscore character (_).

To specify a resource name for the share, specify a name for the sharesmb property, sharesmb=name=resource-name.

For example, to specify a resource name of myfs for the ztank/myfs file system, type:
# zfs set sharesmb=name=myfs ztank/myfs

Create a new ZFS file system that enables SMB sharing.

When creating a ZFS file system to be used for SMB file sharing, set the casesensitivity option to mixed to permit a combination of case-sensitive and case-insensitive matching. Also, set the nbmand option to enforce mandatory cross-protocol share reservations and byte-range locking.

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on fsname

For example, to create a ZFS file system with SMB sharing and nbmand enabled for the ztank/yourfs file system, type:

# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=on ztank/yourfs

To specify a resource name for the share, specify a name for the sharesmb property, sharesmb=name=resource-name.

For example, to specify a resource name of yourfs for the ztank/yourfs file system, type:
# zfs create -o casesensitivity=mixed -o nbmand=on -o sharesmb=name=yourfs ztank/yourfs

Verify how the new file system is shared.

# sharemgr show -vp

Now, you can access the share by connecting to \\solaris-hostname\share-name. For information about how to access CIFS shares from your client, refer to the client documentation.
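
For what it’s worth (my addition, not part of the Genunix page): the same share can also be mounted from the command line on common clients. A sketch, assuming a host named solaris1 and a share named myfs (both names illustrative):

# Linux client (requires the cifs-utils package; the mount point must exist)
sudo mount -t cifs //solaris1/myfs /mnt/myfs -o user=username

# Mac OS X client (the mount point must exist)
mount_smbfs //username@solaris1/myfs /Volumes/myfs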

Quick notes on expanding a ZFS RaidZ Pool – Solaris 11 Express. Howto (see bottom for update)

January 16th, 2011 by EvilT

So you have what was once a gargantuan ZFS RaidZ1 array, but the family videos, pictures, plus the super cool time-windowed (via snapshot) backup method you have created for all your local machines have stuffed up the pool completely. Like me, you view just dumping another pair of mirrored drives into the pool as a hokey kludge that will create dissimilar infrastructure you will have to remember for years (in the event of a failure). Like me, you have also heard that you can replace your drives one at a time with larger drives, and with the successful replacement of the last drive the array will magically expand in size.

The long/short of my migration:

Whenever you turn your system on, ZFS will automatically find your array drives wherever they are and form the array on boot-up. For my migration I bought an external eSATA dock (one of the ones where you pop the drive in the top).

For each drive replacement I followed this procedure.

1. Pop open a shell and become root. (I modded my permissions so pfexec works for me; I show how to do this in another post here on the blog. You can su if you like.) ‘$ pfexec bash’ will give you a root shell. Get a status of the pool and make note of the device names:

# zpool status

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
            c9t2d0  ONLINE       0     0     0
            c9t5d0  ONLINE       0     0     0

2. Shut down the machine.

3. Remove the drive I plan to replace from its current location (bay, SATA, power, et al.)

4. Place that drive into the eSATA dock.

5. Put the new larger drive in the place of the old drive.

6. ZFS worked out where the old drive was on boot up.

7. Become root and look at the devices in the system with the format command (note that Ctrl-D will get you out of the format command). As you can see, one of the devices that was in my zpool before I swapped drives is now one of the new 2TB drives I’m putting into the pool. From running the format command before I put a drive into the eSATA dock, I know that any drive in the dock will be c7t513d0, but you could have run before-and-after format commands to look for the changes. Do be careful and make sure you know where your old and new drives are before the next step, though…

#format

Searching for disks…done

AVAILABLE DISK SELECTIONS:
0. c7t512d0 <ATA    -WDC WD2500AAKS-0953 cyl 30398 alt 2 hd 255 sec 63>
/pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@200,0
1. c7t513d0 <ATA-SAMSUNG HD103UI-0953-931.51GB>
/pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@201,0
2. c9t0d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
/pci@0,0/pci1458,b005@1f,2/disk@0,0
3. c9t1d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
/pci@0,0/pci1458,b005@1f,2/disk@1,0
4. c9t2d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@2,0
5. c9t3d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@3,0
6. c9t4d0 <ATA    -WDC WD20EARS-00-AB51 cyl 60798 alt 2 hd 255 sec 252>
/pci@0,0/pci1458,b005@1f,2/disk@4,0
7. c9t5d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
/pci@0,0/pci1458,b005@1f,2/disk@5,0
Specify disk (enter its number):
^D

8. This was an interesting little annoyance: it seems that the zpool replace command would only work after a zpool status command was run. Running the replace without running the status first gives you the following.

#zpool replace mypool c7t513d0 c9t4d0
cannot replace c7t513d0 with c9t4d0: no such device in pool

So we know we need to run a status first, then follow it with the replace command…

#zpool status mypool

pool: mypool
state: ONLINE
scan: scrub canceled on Sat Jan 15 20:56:30 2011
config:

        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            c7t513d0  ONLINE       0     0     0
            c9t3d0    ONLINE       0     0     0
            c9t2d0    ONLINE       0     0     0
            c9t5d0    ONLINE       0     0     0

errors: No known data errors

#zpool replace mypool c7t513d0 c9t4d0

9. Run another status so you know what is going on

#zpool status mypool

pool: mypool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Sat Jan 15 21:19:26 2011
64.4M scanned out of 3.07T at 8.05M/s, 111h4m to go
15.5M resilvered, 0.00% done
config:

        NAME              STATE     READ WRITE CKSUM
        mypool            ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            replacing-0   ONLINE       0     0     0
              c7t513d0    ONLINE       0     0     0
              c9t4d0      ONLINE       0     0     0  (resilvering)
            c9t3d0        ONLINE       0     0     0
            c9t2d0        ONLINE       0     0     0
            c9t5d0        ONLINE       0     0     0

errors: No known data errors

10. When the process is complete, I believe it is advisable to scrub the pool to ensure all is well: ‘# zpool scrub mypool’. This will also take a while, and you can check on the status of the scrub with ‘# zpool status mypool’.

Notes:

  • When replacing a drive, zpool status will show long estimated times, such as the 111 hours shown above. The numbers kept increasing for at least 2 hours and actually made it up to 423 hours remaining, but after 2 to 3 hours data actually started moving and the estimates became much more realistic. This was true for each drive I replaced. I can tell you that a 4-drive RaidZ1 array ~85% full took about 12 hours per drive to complete.
  • One crazy note… My server shut down current connections and failed to open the console on the machine during the copy. It started to fail all connection attempts with out-of-memory errors… Not good! Maybe I should not have been running virtual machines while it was resilvering on another pool… Dunno, but it was definitely strange. The resilver succeeded, and the machine did let me in after a couple of hours. I did realize that after installing Oracle Solaris 11 Express I had forgotten to limit the ZFS ARC cache (which I had done before; good reference here: ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache). So before the last drive swap I set the ZFS ARC cache limit to 7 gigs of memory via the following: “set zfs:zfs_arc_max = 0x1C0000000” (see the note below).
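
In case it saves someone a search (my note, not part of the original steps): that tunable lives in /etc/system and takes effect after a reboot; 0x1C0000000 bytes is 7GB.

# Append to /etc/system, then reboot, to cap the ZFS ARC at 7GB
set zfs:zfs_arc_max = 0x1C0000000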

Warning:

  • Remember that in a RaidZ1 array, the loss of 2 drives at one time will lose you the entire array! I know I’m paranoid, but I have lost RAID 5 arrays this way in the past, so imagine the following: you are upgrading a multi-drive RaidZ1 array. If you did not precondition the drives (have them powered up and running drive tests over a few days; most home users do not do this), you will have more than one drive in the array that has been spinning for less than 24 hours. My experience with drive failures is as follows.
    • If a drive does not make it past power on you are OK, you stop migration and get a different drive… no problem.
    • Next hurdle: the drives that fail within 48 hours. This should still be a low percentage, but there will be some.
    • The final, more insidious failures are the drives that go flaky and start losing sectors, then fail. This usually takes a few weeks.

Since most of the failures are when drives are relatively new, the odds of having two new drives in an array fail at the same time are far greater than the odds of having two simultaneous failures in a seasoned array. So the average home user will probably get a rack of 4 new huge hard drives on their front porch, run to the server, and start swapping out their array. With all brand-new drives in the array, the odds that two will fail in the next week are FAR greater than they would be after the system has been spinning for a week, and lower still after a month.

  • Some strategies to consider as you expand your home ZFS RaidZ1 array:
  • Expand safely. Replace one drive a week with the newer drive, or alternately season drives in another system for a week before you start putting them into your production array.
  • As long as you have not replaced the last larger drive, each drive is still held to the size dictated by your original array. You can avoid having to keep spares of the new larger drive size by keeping your old drives and swapping them back in the event of a failure (until the last drive is replaced and ZFS starts using the full size of the new drives).
  • I VERY highly advise weekly scrubbing of the home array. Monitoring ‘zpool status’ after scrubs is the easiest way I know of to identify a flaky drive that is losing sectors. An easy way to do a weekly scrub is to add a shell script to your crontab as follows:
    • Create a shell script in your home directory called zfsmaint.sh containing the following line for each of your pools: zpool scrub <my zfs pool name>
    • Add it to your crontab via “# crontab -e” with the following entry (replacing <your home dir> with your home directory):
      0 23 * * 1 /export/home/<your home dir>/zfsmaint.sh
      If you are having problems with the vi editor, please go look up vi commands on the web.

General:

Excellent ZFS Reference: ZFS_Best_Practices_Guide

Future wishes…

When I started my ZFS array there was no RaidZ2 or RaidZ3 (double/triple redundancy), but now there is… Sun never built an upgrade path, and I really hope Oracle will see this as an issue and make one available. At the trivial cost of another disk, I would like to move to a 2-drive-redundant array without having to build a whole extra array to move my data through.

UPDATE:

I wanted to make this walkthrough for everyone out there as a compilation of all the individual blogs/guides I had to use to perform the task. After all was said and done, it did not work. Apparently Oracle broke the auto-expand in Solaris 11. I went through the steps of setting the pool autoexpand property and trying to force the pool to expand with the new ‘zpool online -e’ command (see the sketch below); nothing worked. So I ended up copying my data to another pool, creating a new RaidZ2 pool (which I wanted anyway), and copying the data back. This was done via the zfs send/recv function over SSH to another server.
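
For reference, the expansion commands I mean are these (the device name is illustrative, and ‘zpool online -e’ is run once per replaced drive):

# What should have worked: let the pool grow into the bigger drives
zpool set autoexpand=on mypool
zpool online -e mypool c9t4d0

After playing around, the command line for the send/recv copy is as follows: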

Create a snapshot in your local machine via

zfs snapshot <mypool>/<filesystem>@<snapshot>

so

# zfs snapshot tank/myshare@today

My destination backup server was at 192.168.1.67, and I created a pool in it called tank2. zfs automatically copied the snapshot and created a myshare file system and snapshot in tank2.

zfs send <source_pool>/<source_filesystem>@<snapshot> | ssh <account>@<server_ip> pfexec /sbin/zfs recv <dest_pool>/<dest_filesystem>@<dest_snapshot>

or

# zfs send tank/myshare@today | ssh myaccount@192.168.1.67 pfexec /sbin/zfs recv tank2/myshare@today

After I copied all the file systems to the new server, I did a scrub on the new server to ensure the drives/data were good. Then I destroyed the pool on the original server, created the new pool (which now filled up the drives), and copied everything back. The following is from the console on the new server; the IP address of the old server is 192.168.1.68:

# zfs send tank2/myshare@today | ssh myaccount@192.168.1.68 pfexec /sbin/zfs recv tank/myshare@today

When it is all moved, scrub the pool and Bob’s your uncle… 🙂