Archive for the 'Reference' Category

Rice Cooker Gluten Free pancake and the Zojirushi 5.5 Cup Neuro Fuzzy

Sunday, November 22nd, 2015

OK, I will keep updating this post as I am still experimenting.

I found numerous articles on the web showing ways to make a loaf of bread from pancake mix in a rice cooker (here is one). Some articles said that it was a piece of cake (pardon the pun), while others claimed that the product was unusable. None of the articles I found mentioned the type of rice cooker I had (Zojirushi Neuro Fuzzy) or had any advice on making the recipe gluten free. I can tell you now that I have had good success, and to help anyone else out there, here is a running log of my observations.


  • So far I have used the porridge setting on everything I have tried. I have not tried other settings, but probably will later.
  • Only make a single batch. The cooker will not get the center done on a double batch.
  • I let the rice cooker run to the end of the cycle and there has been no problem with having to restart or having the loaf burn.
  • For some unknown reason I always add the quantity of oil called for in the waffle recipe, but use the ingredients from the pancake recipe.
  • To extract the loaf I remove the rice cooker bowl from the machine and put a plate on top of the bowl. Then I invert the bowl and the loaf is unceremoniously dumped onto the plate. I’ve never had one crack or break as a result of this.
  • Bob’s Red Mill pancake mix makes a good dark loaf that seems to have a texture and color similar to soda bread. I think with a little tweaking there is a good opportunity to make a reasonable facsimile of Irish Soda Bread that is gluten free.
  • Gluten Free Bisquick makes a more moist slightly rubbery loaf, but it is still very much edible and will absorb syrup fairly readily. When you open the rice cooker the top of the loaf looks white and rubbery, but if you touch it, it is cooked through. I’ve let it sit for up to 10 minutes on keep warm and it does not burn the bottom.
  • Hodgson Mill gluten free pancake mix holds a most dubious honor: it is the worst tasting mix I have ever encountered. It made a rubbery loaf that smelled and looked bad. I bought it because I needed a mix that day and the store did not have Bob’s or Bisquick. Trust me, it was not an acceptable substitute.
  • Kodiak Cakes – gluten free frontier flapjack mix – these were very tasty, but came out very doughy. Also, these are oat based. If you have gluten issues you may also have issues with oats, so be careful.
  • Pamela’s was pretty good, but you have to be careful with this one if you have tree nut allergies. They include almond flour in their mix.


  • King Arthur gluten free pancake mix – This one was wonderful. The whole kitchen smelled like fresh baking, and the taste was excellent. Note that the instructions on the box I had were for a double recipe (using the whole bag). I halved the recipe and it made a standard-size rice cooker pancake. As noted above, if I had made the full recipe it would have come out mostly uncooked.
  • Glutino – Gluten Free Pantry Instant Baking Mix. This one comes in a little single-serving bottle. I do NOT recommend trying the bottle. I did, and it did not mix; it basically made sludge in the bottom that I gave up trying to dissolve into the batter. Can’t tell you how it tastes, and I have no plans to buy another one to try…


  • Hungry Jack funfetti gluten free pancakes. This mix tastes just like cake. I would probably like it more without the funfetti, but it wasn’t bad. The flavor of the cakes was great! I really liked it… That said, I did not follow the instructions on mixing. It is an add-water-only mix, but the waffle instructions said to add egg and oil, so I decided to follow the base instructions used by most of the other mixes: 1.25 cups of the dry mix, one large egg, one cup of milk, and a couple of tablespoons of vegetable oil. The texture was as close to a super moist cake as anything I’ve eaten in the gluten free genre.

Trader Joe’s mixes are pretty darned good. The pumpkin mix was only available around October, but it was awesome! Once again, I don’t follow the recipe on the box. I use a cup of mix, one egg, two tablespoons of butter (of course it is better than vegetable oil), then a cup of milk. If the mix is thick I add more milk to thin it out… That ratio seems to work on just about every mix I’ve used so far.

I picked up a Tiger brand induction rice cooker to replace the Zojirushi. I’m still getting used to it; it seems to be more finicky than the Zojirushi. The only setting that consistently makes good pancakes is “plain”. Of course, like almost every other rice cooker out there, there is almost no explanation of what the settings actually do, but I have not found a better one yet.

To rate the mixes I’ve used so far:

Best: Bisquick, King Arthur, Hungry Jack, Trader Joe’s Gluten Free, Trader Joe’s Gluten Free Pumpkin

Great with exceptions: If you don’t have issues with oatmeal, Kodiak was very good. If you don’t have issues with tree nuts, Pamela’s is very good.

Pretty good: Bob’s Red Mill

Avoid: Glutino, Hodgson Mill

Quick notes on expanding a ZFS RaidZ pool – Solaris 11 Express how-to (see bottom for update)

Sunday, January 16th, 2011

So you have what was once a gargantuan ZFS RaidZ1 array, but the family videos, pictures, and the super cool time-windowed (via snapshot) backup method you created for all your local machines have stuffed the pool completely full. Like me, you view just dumping another pair of mirrored drives into the pool as a hokey kludge that will create dissimilar infrastructure you will have to remember for years (in the event of a failure). Like me, you have also heard that you can replace your drives one at a time with larger drives, and with the successful replacement of the last drive the array will magically expand in size.

The long/short of my migration:

Whenever you turn your system on, ZFS will automatically find your array drives wherever they are and form the array on boot-up. For my migration I bought an external eSATA dock (one of the ones where you pop the drive in the top).

For each drive replacement I followed this procedure.

1. Pop open a shell and become root. (I modded my permissions so pfexec works for me; I show how to do this in another post here on the blog. You can su if you like.) Running “pfexec bash” will give you a root shell. Get a status of the pool and make note of the device names.

#zpool status

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
            c9t2d0  ONLINE       0     0     0
            c9t5d0  ONLINE       0     0     0

2. Shut down the machine.

3. Remove the drive I plan to replace from its current location (bay, SATA cable, power, et al).

4. Place that drive into the eSATA dock.

5. Put the new larger drive in the place of the old drive.

6. ZFS worked out where the old drive was on boot up.

7. Become root and look at the devices in the system with the format command (Ctrl-D will get you out of format). As you can see, one of the devices that was in my zpool before I swapped drives is now one of the new 2 TB drives I’m putting into the pool. From running format before I put a drive into the eSATA dock, I knew that any drive in the dock would show up as c7t513d0, but you could also run before-and-after format commands and look for the changes. Do be careful and make sure you know where your old and new drives are before the next step.

#format

Searching for disks…done

AVAILABLE DISK SELECTIONS:
       0. c7t512d0 <ATA-WDC WD2500AAKS-0953 cyl 30398 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@200,0
       1. c7t513d0 <ATA-SAMSUNG HD103UI-0953-931.51GB>
          /pci@0,0/pci8086,3a42@1c,1/pci1458,b000@0/disk@201,0
       2. c9t0d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
          /pci@0,0/pci1458,b005@1f,2/disk@0,0
       3. c9t1d0 <ATA-WDC WD6401AALS-0-3B01-596.17GB>
          /pci@0,0/pci1458,b005@1f,2/disk@1,0
       4. c9t2d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
          /pci@0,0/pci1458,b005@1f,2/disk@2,0
       5. c9t3d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
          /pci@0,0/pci1458,b005@1f,2/disk@3,0
       6. c9t4d0 <ATA-WDC WD20EARS-00-AB51 cyl 60798 alt 2 hd 255 sec 252>
          /pci@0,0/pci1458,b005@1f,2/disk@4,0
       7. c9t5d0 <ATA-WDC WD20EARS-00M-AB51-1.82TB>
          /pci@0,0/pci1458,b005@1f,2/disk@5,0
Specify disk (enter its number): ^D

8. This was an interesting little annoyance: it seems that the zpool replace command would only work after a zpool status command had been run. Running the replace without running a status first gives you the following.

#zpool replace mypool c7t513d0 c9t4d0
cannot replace c7t513d0 with c9t4d0: no such device in pool

So we know we need to run a status first, then follow it with the replace command…

#zpool status mypool

  pool: mypool
 state: ONLINE
  scan: scrub canceled on Sat Jan 15 20:56:30 2011
config:

        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            c7t513d0  ONLINE       0     0     0
            c9t3d0    ONLINE       0     0     0
            c9t2d0    ONLINE       0     0     0
            c9t5d0    ONLINE       0     0     0

errors: No known data errors

#zpool replace mypool c7t513d0 c9t4d0

9. Run another status so you know what is going on.

#zpool status mypool

  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Jan 15 21:19:26 2011
        64.4M scanned out of 3.07T at 8.05M/s, 111h4m to go
        15.5M resilvered, 0.00% done
config:

        NAME             STATE     READ WRITE CKSUM
        mypool           ONLINE       0     0     0
          raidz1-0       ONLINE       0     0     0
            replacing-0  ONLINE       0     0     0
              c7t513d0   ONLINE       0     0     0
              c9t4d0     ONLINE       0     0     0  (resilvering)
            c9t3d0       ONLINE       0     0     0
            c9t2d0       ONLINE       0     0     0
            c9t5d0       ONLINE       0     0     0

errors: No known data errors

10. When the process is complete, I believe it is advisable to scrub the pool to ensure all is well: #zpool scrub mypool. This will also take a while, and you can check on the status of the scrub with #zpool status mypool. A condensed recap of the whole pass follows.
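To pull the whole thing together, here is one replacement pass condensed into the commands I ran. Treat it as a recap sketch: the device names are from my system above, so substitute your own from format and zpool status.

pfexec bash                            # get a root shell
zpool status mypool                    # note the device names, then shut down and swap drives
format                                 # after boot-up, confirm where the old and new drives are (Ctrl-D exits)
zpool status mypool                    # run a status first or the replace below will fail
zpool replace mypool c7t513d0 c9t4d0   # old drive (in the dock), new drive (in the bay)
zpool status mypool                    # watch the resilver progress
zpool scrub mypool                     # once the resilver completes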

Notes:

  • When replacing a drive, zpool status will show long estimated times, such as the 111 hours shown above. The numbers kept increasing for at least 2 hours and actually made it up to 423 hours remaining, but after 2 to 3 hours data actually started moving and the estimates became much more realistic. This was true for each drive I replaced. I can tell you that a 4-drive RaidZ1 array ~85% full took about 12 hours per drive to complete.
  • One crazy note… My server shut down current connections and failed to open the console on the machine during the copy. It started to fail all connection attempts with out-of-memory errors… Not good! Maybe I should not have been running virtual machines on another pool while it was resilvering… Dunno, but it was definitely strange. The resilver succeeded, and the machine did let me in after a couple of hours. I did realize that after installing Oracle Solaris 11 Express I had forgotten to limit the ZFS ARC cache (which I had done before; good reference here: ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache). So before the last drive swap I set the ZFS ARC cache limit to 7 gigs of memory via the following: “set zfs:zfs_arc_max = 0x1C0000000” (see the snippet below for where the line goes).
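That tunable lives in /etc/system (per the tuning guide linked above) and takes effect on the next reboot; 0x1C0000000 bytes works out to 7 GiB. Comment lines in /etc/system start with an asterisk:

* /etc/system: cap the ZFS ARC at 7 GiB (0x1C0000000 bytes)
set zfs:zfs_arc_max = 0x1C0000000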

Warning:

  • Remember that in a RaidZ1 array the loss of 2 drives at one time will lose you the entire array! I know I’m paranoid, but I have lost RAID 5 arrays this way in the past, so imagine the following: you are upgrading a multi-drive RaidZ1 array. If you did not precondition the drives (have them powered up and under test over a few days; most home users do not do this), you will have more than one drive in the array that has been spinning for less than 24 hours. My experience with drive failures is as follows.
    • If a drive does not make it past power-on you are OK; you stop the migration and get a different drive… no problem.
    • The next hurdle is the drives that fail within 48 hours. These should still be a low percentage, but there will be some.
    • The final, more insidious failures are the drives that go flaky and start losing sectors, then fail. This usually takes a few weeks.

Since most failures occur when drives are relatively new, the odds of having two new drives in an array fail at the same time are far greater than the odds of two simultaneous failures in a seasoned array. So when the average home user gets a rack of 4 huge new hard drives on their front porch, runs to the server, and starts swapping out their array, the odds that two of those brand new drives will fail in the next week are FAR greater than the odds after the system has been spinning for a week, and lower still after a month.

  • Some strategies to consider as you expand your home ZFS RaidZ1 array:
  • Expand safely. Replace one drive a week with the newer drive, or alternatively season drives in another system for a week before you start putting them into your production array.
  • As long as you have not replaced the last larger drive, each drive is still held to the size dictated by your original array. You can avoid having to keep spares of the new larger drive size by keeping your old drives and swapping them back in the event of a failure (until the last drive is replaced and ZFS starts using the full size of the drives).
  • I VERY highly advise weekly scrubbing of the home array. Monitoring ‘zpool status’ after scrubs is the easiest way I know of to identify a flaky drive that is losing sectors. An easy way to do a weekly scrub is to add a shell script to your crontab as follows (see the sketch after this list):
    • Put the line “zpool scrub <my zfs pool name>” for each of your pools in a shell file in your home directory; I call mine zfsmaint.sh.
    • Open your crontab with “#crontab -e” and add the following entry (of course replacing <your home dir>):
      0 23 * * 1 /export/home/<your home dir>/zfsmaint.sh
      If you are having problems with the vi editor, please go look up vi commands on the web.
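Putting those pieces together, zfsmaint.sh is just the following (a minimal sketch; it assumes whatever account cron runs it as is allowed to run zpool scrub, so use root’s crontab or a pfexec wrapper, and remember to chmod +x the file):

#!/bin/bash
# zfsmaint.sh: weekly pool scrub, one line per pool
zpool scrub mypool

With the crontab entry above, cron fires it at 11 PM every Monday (the fields are minute, hour, day of month, month, day of week).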

General:

Excellent ZFS Reference: ZFS_Best_Practices_Guide

Future wishes…

When I started my ZFS array there was no RaidZ2 or RaidZ3 (double/triple redundancy), but now there is… Sun never built an upgrade path, and I really hope Oracle will see this as an issue and make one available. At the trivial cost of another disk I would like to move to a 2-drive-redundant array without having to build a whole extra array to move my data through.

UPDATE:

I wanted to make this walkthrough a compilation of all the individual blogs/guides I had to use to perform the task. After all was said and done, it did not work. Apparently Oracle broke auto-expand in Solaris 11. I went through the steps of setting the pool auto-expand property and trying to force the pool to expand with the new ‘zpool online -e’ command (see below). Nothing worked. So I ended up copying my data to another pool, creating a new RaidZ2 pool (which I wanted anyway), and copying the data back. This was done via the zfs send/recv functions over SSH to another server. After playing around, the command line to do this is as follows:
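For the record, the failed expansion attempt looked like the following sketch (these are the stock property and online commands; on my pool they simply had no effect):

zpool set autoexpand=on mypool   # let the pool grow once all its devices have grown
zpool online -e mypool c9t4d0    # ask ZFS to expand into the new space (repeat per replaced drive)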

Create a snapshot on your local machine via

zfs snapshot <mypool>/<filesystem>@<snapshot>

so

# zfs snapshot tank/myshare@today

My destination backup server was at 192.168.1.67, and I created a pool on it called tank2. The zfs recv on the far end automatically created a myshare filesystem and snapshot in tank2.

zfs send <source_pool>/<source_filesystem>@<snapshot> | ssh <account>@<server_ip> pfexec /sbin/zfs recv <dest_pool>/<dest_filesystem>@<dest_snapshot>

or

# zfs send tank/myshare@today | ssh myaccount@192.168.1.67 pfexec /sbin/zfs recv tank2/myshare@today

After I copied all the filesystems to the new server, I did a scrub on the new server to ensure the drives/data were good, then destroyed the pool on the original server, created the new pool (which now filled up the drives), and copied everything back. This is run from the console on the new server; the IP address of the old server is 192.168.1.68:

# zfs send tank2/myshare@today | ssh myaccount@192.168.1.68 pfexec /sbin/zfs recv tank/myshare@today
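As an aside, if you have a lot of filesystems, a recursive snapshot plus a replicated send should move the whole tree in one shot. I moved mine one filesystem at a time, so treat this as an untested sketch:

zfs snapshot -r tank2@today            # snapshot every filesystem in the pool at once
zfs send -R tank2@today | ssh myaccount@192.168.1.68 pfexec /sbin/zfs recv -d tank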

When it is all moved, scrub the pool and Bob’s your uncle… 🙂

Cheap Mac Mini 802.11n wireless upgrade – Macbook too…

Saturday, November 22nd, 2008

OK, after much wailing and gnashing of teeth, I finally have a set of products that allows you to upgrade your Intel Mac Mini (I have the Core 2 Duo 1.8 GHz) to a reliable wireless-N solution.

So I have set up a home network with an Airport Extreme base station, and to connect other computers via wireless I have extended the network with two Airport Express modules. My understanding is that if you have a wireless-N network that allows wireless-G connections, the instant a G connection is made the entire network is slowed to wireless-G speed. Since I use the Mac Mini as a media player (via the most excellent media player package in the entire world, PLEX), I really need the wireless network to run at full N speed. To that end I set the Airports to run wireless N at 5 GHz only (I set up a cheap G router as a bridge for any wireless-G people who visit the house). Since the only clients that can connect at 5 GHz are A and N, I’m good to go.

First off, I’m not planning on a pictorial for Mac Mini disassembly as the people over at HardMac already have a nice one.

The Ordeal:

  • First off I tried adding one of the wireless-N USB sticks. This allowed me to connect with N, but the wireless USB sticks only run at 2.4 GHz. I was in search of the 5 GHz range, so this was not an option.
  • I ordered the Macbook Pro wireless upgrade card (part number Apple MA688Z/B – Airport Extreme Wireless Upgrade Kit – I picked it up from J&R Music online for $49). Since the Mini only had one antenna, I installed the card with only one antenna connected (there were many reports on the internet that the card would work this way, but at slower speed). After installation the Mini did not recognize the card as 802.11 A/B/G/N (viewed via Network Utility.app in the Applications/Utilities folder) even though I was using Leopard 10.5.5. So I installed the airport software that came with the Airports. After that the card showed up as 802.11 A/B/G/N and all was well. The card hooked right up to the network and worked great – for a few minutes. The trouble was that it would drop the network connection every time I loaded the card up with traffic. Whenever it dropped the connection the card would be unable to connect to any network until I rebooted the system; then it would hook up again. Sometimes the card would run for days at a time if I did no more than surf the net a little, but every time I viewed video or played music from my NAS server the Mini would drop off the network.
  • I found a blog entry on the net saying that you could use the Bluetooth antenna as the second antenna for the MA688Z/B card. Thinking that the second antenna might be the issue, and given that I was not using Bluetooth in the Mini, I gave that a go. The card displayed all the same issues as before.
  • I gave up and put the old card back in, thinking that I could at least hear my music and watch low-bitrate video off the server, and just copy the high-definition videos from the server to the Mini whenever I wanted to watch them. No dice. After the drivers were upgraded, the Mini would not work with the old card again (I didn’t try playing with drivers or configuration files after that, since wireless G was not my original goal).
  • Since the network card in the Mini (and in most notebook computers) is a standard mini PCI card, I started to look for non-Apple alternatives. I found one blog entry where the poster said they had success installing a Gigabyte brand card in a Macbook Pro, and some research turned up the Gigabyte GN-W106N-RH, which is based on the same (or a similar) Atheros AR5008 chipset as the Apple MA688Z/B card. In the same research I found that antennas themselves are frequency rated (2.4 GHz, 5 GHz, or both), which makes sense, but I had not considered that the antennas in the Mini might not be optimized for, or even capable of, 5 GHz (even though the Mini would not run the MA688Z/B card successfully even at 2.4 GHz). I went looking for a source for antennas and found Oxfordtec.com. Since they also had the best price for the Gigabyte mini PCI network card ($59.95), I ordered both the card and 3 antennas ($8.95 each) from them (I actually ordered the version with the longer wire, but I think it’s a bit too long, so I’m recommending the one above). I installed the Gigabyte card and plugged in the three antennas (I also tried every possible combination using the Mini’s internal antennas – Bluetooth and the original network antenna – but neither worked well with 5 GHz wireless N). Since the Mini’s case is aluminum (which would block the antenna signal) and I didn’t want to futz about and mess up the cooling flow through the vents in the bottom of the Mini, I routed the antennas out the back through the lower corner hole for the fan vent (the vent hole farthest from the power button). I turned on the Mini and the card hooked right up. The drivers for the Apple card worked wonderfully for the Gigabyte card: Network Utility shows a solid 300 megabit connection and the card runs solid as a rock.

I also had a 2 GHz Macbook (Core 2, non-Duo) that I wanted to upgrade. That one was a breeze. I followed the directions from HardMac for the upgrade with the Apple card, since I had a spare one from the Mini debacle… 😉 Once again, even though posters say Leopard had the N drivers built in (maybe if I had done a clean install Leopard might have put in the driver for the N card), I still had to run the updater utility that came with the Airport cards. Since the Macbook had two antenna connections already, there was no wondering whether I needed another one. The Macbook hooked up at 5 GHz 802.11n and is stable as a rock.

Faking Out the Leopard Installer with Open Firmware

Thursday, May 22nd, 2008

To install Leopard on an “unsupported” G4 clocked under 867 MHz:

1. Reboot your Mac and hold down the Cmd-Opt-O-F keys until you get a white screen with black text. This is the Open Firmware prompt.

2. Insert the Mac OS X Leopard Install DVD.

3. Type the following lines exactly as shown below into the Open Firmware prompt. Be mindful of capitalization, spaces, zeros, etc. If the command is properly typed and understood, Open Firmware will display “ok” at the end of each line after you hit “return”. What these lines do is set the CPU speed reported by Open Firmware to OS X as an 867 MHz G4 processor system. They then continue the boot from the DVD drive.

For single CPUs, use the following three lines:

dev /cpus/PowerPC,G4@0
d# 867000000 encode-int " clock-frequency" property
boot cd:,\\:tbxi

For dual CPUs, use the following five lines:

dev /cpus/PowerPC,G4@0
d# 867000000 encode-int " clock-frequency" property
dev /cpus/PowerPC,G4@1
d# 867000000 encode-int " clock-frequency" property
boot cd:,\\:tbxi

4. Continue the install normally.

5. This CPU setting is only in effect until the Mac reboots. Once OS X Leopard is installed and your Mac has rebooted, the proper CPU speed should once again be displayed when you select About This Mac under the Apple menu.

Be sure to enter every line above exactly as written. The system will prompt you with an “ok” after each line if you have entered it correctly.

The text above was lifted from the link below… 🙂

Faking Out the Leopard Installer with Open Firmware

Pew Research Center Study of Interest and, for me, disappointment

Tuesday, April 17th, 2007

Public Knowledge of Current Affairs Little Changed by News and Information Revolutions
What Americans Know: 1989-2007

This is a fascinating and potentially disturbing report. For your own interest, you can take the quiz they offered those who participated in the study. I took it a little while ago and found myself in the 96th percentile, as I was able to correctly answer every question. One may draw a large range of conclusions regarding this report and what it reveals. I’ve no desire to write that book, as I’m lazy and would likely grow depressed doing so.

Dems Release Report Alleging White House Lawbreaking

Sunday, August 13th, 2006

You may find this of interest.

This may become worth referencing and remembering should the Democrats retake the US House of Representatives and/or US Senate. I do not anticipate them achieving either, but it is nice to see that at least some of our elected officials are willing to investigate what appears to be wrongdoing.