Napp-it: a free NAS web GUI for Nexenta or EON/OpenSolaris

April 23rd, 2010 by EvilT

This looks very promising. Nexenta has long been an excellent OpenSolaris-based distro that gave the user community a ZFS-based filesystem with apt-get functionality, all in a web-GUI-configured NAS box package. They released the core of this OS, without the configuration programs, as Nexenta Core…

This is the first how-to and implementation I’ve seen for taking the Nexenta Core package and turning it into a turnkey NAS box with all the ZFS, CIFS, and iSCSI goodness that Nexenta Core has to offer. I’m thinking of cranking one up as a virtual machine over the weekend… :)

Link to site (English and German)

Installation instructions here: Napp-it installation instructions

Screens here: Napp-it ScreenShots

Web-based configuration of an OpenSolaris or Nexenta server as a NAS system.
Includes:

  1. Base system with root ssh access via PuTTY, WinSCP, and the Midnight Commander file browser
  2. SMB fileserver for Mac/Windows workgroups and Windows domains (OpenSolaris CIFS)
  3. NFS and iSCSI SAN, plus iSCSI storage for Apple’s Time Machine (OpenSolaris COMSTAR)
  4. Backup server

New features and abilities in this version

  • ZFS raid (up to raidz3 triple redundancy) and automated deduplication
  • CIFS shares with full share-level Access Control List configuration
  • COMSTAR iSCSI
  • Crossbow
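
If you’d rather see what that amounts to at the command line, here is a rough sketch of the kind of OpenSolaris commands napp-it drives for you (the pool, disk, and dataset names here are made up):

# zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
# zfs set dedup=on tank
# zfs create -o sharesmb=on tank/share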

Cool short film. Octopus steals a diver’s camera

April 19th, 2010 by EvilT

octopus steals my video camera and swims off with it (while it’s Recording)

Spiegel has the most excellent summation on the state of Global Warming…

April 18th, 2010 by EvilT

Everyone on either side of the fence should check out this article on the state of climate science. It gets really good about eight paragraphs into the article. I am very impressed that they cited both Landsea and Pielke in the hurricane debate… :)

The Truth in Caller ID Act of 2010 makes Caller ID spoofing a crime!!!

April 15th, 2010 by EvilT

Wooo frikkin Hoo!!!!
http://www.engadget.com/2010/04/15/truth-in-caller-id-act-of-2010-makes-caller-id-spoofing-a-crime/

Open Source games I want to try…

April 6th, 2010 by EvilT

Scorched 3D: I cannot tell you how many hours my friends and I spent playing Scorched. I really want to see the 3D version….

Open Transport Tycoon Deluxe: a clone of the old MicroProse trading game…

The Finger Test to Check the Doneness of Meat

April 4th, 2010 by EvilT

If you do not have a temp probe handy, here is a quick way to assess how done your steak is. When you press on the top of the steak, the meat will give a little. The amount it gives tells you how done the steak is. You can use your own palm as a reference for this method.

Press your finger into the muscle just to the inside of the thumb on your palm. When your palm is open, this should give about as much as a raw steak.

Now touch the tip of your thumb to the tip of your first finger. When you press the same spot on your palm, you will notice that the tension makes the muscle firmer. Touching the tip of your first finger approximates the firmness of a rare steak.

Now repeat with the second finger. This would be the equivalent of a medium rare steak.

Now the third finger is approximately medium.

The fourth and final finger is the stage I call ruined. Others may call it “well done”… :)

How to enable Deduplication in the ZFS filesystem on OpenSolaris

April 3rd, 2010 by EvilT

Works like a champ here!

http://blogs.sun.com/bonwick/entry/zfs_dedup

The article by Jeff Bonwick is reproduced below (just to be sure I don’t lose it one day)… ;)

Monday Nov 02, 2009

ZFS Deduplication



You knew this day was coming: ZFS now has built-in deduplication.


If you already know what dedup is and why you want it, you can skip
the next couple of sections. For everyone else, let’s start with
a little background.

What is it?


Deduplication is the process of eliminating duplicate copies of data. Dedup is generally either file-level, block-level, or byte-level. Chunks of data — files, blocks, or byte ranges — are checksummed using some hash function that uniquely identifies data with very high probability. When using a secure hash like SHA256, the probability of a hash collision is about 2^-256 = 10^-77 or, in more familiar notation, 0.00000000000000000000000000000000000000000000000000000000000000000000000000001. For reference, this is 50 orders of magnitude less likely than an undetected, uncorrected ECC memory error on the most reliable hardware you can buy.


Chunks of data are remembered in a table of some sort that maps the
data’s checksum to its storage location and reference count. When you
store another copy of existing data, instead of allocating new space
on disk, the dedup code just increments the reference count on the
existing data. When data is highly replicated, which is typical of
backup servers, virtual machine images, and source code repositories,
deduplication can reduce space consumption not just by percentages,
but by multiples.

What to dedup: Files, blocks, or bytes?


Data can be deduplicated at the level of files, blocks, or bytes.


File-level dedup assigns a hash signature to an entire file. It has the lowest overhead when the natural granularity of data duplication is whole files, but it also has significant limitations: any change to any block in the file requires recomputing the checksum of the whole file, which means that if even one block changes, any space savings is lost because the two versions of the file are no longer identical. This is fine when the expected workload is something like JPEG or MPEG files, but is completely ineffective when managing things like virtual machine images, which are mostly identical but differ in a few blocks.


Block-level dedup has somewhat higher overhead than file-level dedup when whole files are duplicated, but unlike file-level dedup, it handles block-level data such as virtual machine images extremely well. Most of a VM image is duplicated data — namely, a copy of the guest operating system — but some blocks are unique to each VM. With block-level dedup, only the blocks that are unique to each VM consume additional storage space. All other blocks are shared.


Byte-level dedup is in principle the most general, but it is also the most costly because the dedup code must compute ‘anchor points’ to determine where the regions of duplicated vs. unique data begin and end. Nevertheless, this approach is ideal for certain mail servers, in which an attachment may appear many times but not necessarily be block-aligned in each user’s inbox. This type of deduplication is generally best left to the application (e.g. Exchange server), because the application understands the data it’s managing and can easily eliminate duplicates internally rather than relying on the storage system to find them after the fact.


ZFS provides block-level deduplication because this is the finest granularity that makes sense for a general-purpose storage system. Block-level dedup also maps naturally to ZFS’s 256-bit block checksums, which provide unique block signatures for all blocks in a storage pool as long as the checksum function is cryptographically strong (e.g. SHA256).

When to dedup: now or later?


In addition to the file/block/byte-level distinction described above,
deduplication can be either synchronous (aka real-time or in-line)
or asynchronous (aka batch or off-line). In synchronous dedup,
duplicates are eliminated as they appear. In asynchronous dedup,
duplicates are stored on disk and eliminated later (e.g. at night).
Asynchronous dedup is typically employed on storage systems that have
limited CPU power and/or limited multithreading to minimize the
impact on daytime performance. Given sufficient computing power,
synchronous dedup is preferable because it never wastes space
and never does needless disk writes of already-existing data.


ZFS deduplication is synchronous. ZFS assumes a highly multithreaded operating system (Solaris) and a hardware environment in which CPU cycles (GHz times cores times sockets) are proliferating much faster than I/O. This has been the general trend for the last twenty years, and the underlying physics suggests that it will continue.

How do I use it?


Ah, finally, the part you’ve really been waiting for.


If you have a storage pool named ‘tank’ and you want to use dedup,
just type this:


zfs set dedup=on tank


That’s it.


Like all zfs properties, the ‘dedup’ property follows the usual rules
for ZFS dataset property inheritance. Thus, even though deduplication
has pool-wide scope, you can opt in or opt out on a per-dataset basis.
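
You can check where a dataset picked up its setting with zfs get; the SOURCE column shows whether the value is local or inherited (assuming a pool named ‘tank’):

# zfs get -r dedup tank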

What are the tradeoffs?


It all depends on your data.


If your data doesn’t contain any duplicates, enabling dedup will add
overhead (a more CPU-intensive checksum and on-disk dedup table entries)
without providing any benefit. If your data does contain duplicates,
enabling dedup will both save space and increase performance. The
space savings are obvious; the performance improvement is due to the
elimination of disk writes when storing duplicate data, plus the
reduced memory footprint due to many applications sharing the same
pages of memory.


Most storage environments contain a mix of data that is mostly unique
and data that is mostly replicated. ZFS deduplication is per-dataset,
which means you can selectively enable dedup only where it is likely
to help. For example, suppose you have a storage pool containing
home directories, virtual machine images, and source code repositories.
You might choose to enable dedup as follows:


zfs set dedup=off tank/home


zfs set dedup=on tank/vm


zfs set dedup=on tank/src
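
To see how much space you are actually saving, the pool reports an overall dedup ratio; on builds with dedup support it appears in the DEDUP column of zpool list and as the dedupratio pool property:

# zpool list tank
# zpool get dedupratio tank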

Trust or verify?


If you accept the mathematical claim that a secure hash like SHA256 has only a 2^-256 probability of producing the same output given two different inputs, then it is reasonable to assume that when two blocks have the same checksum, they are in fact the same block. You can trust the hash. An enormous amount of the world’s commerce operates on this assumption, including your daily credit card transactions. However, if this makes you uneasy, that’s OK: ZFS provides a ‘verify’ option that performs a full comparison of every incoming block with any alleged duplicate to ensure that they really are the same, and ZFS resolves the conflict if not.
To enable this variant of dedup, just specify ‘verify’ instead of ‘on’:


zfs set dedup=verify tank

Selecting a checksum


Given the ability to detect hash collisions as described above, it is
possible to use much weaker (but faster) hash functions in combination
with the ‘verify’ option to provide faster dedup. ZFS offers this
option for the fletcher4 checksum, which is quite fast:


zfs set dedup=fletcher4,verify tank


The tradeoff is that unlike SHA256, fletcher4 is not a pseudo-random hash function, and therefore cannot be trusted not to collide. It is therefore only suitable for dedup when combined with the ‘verify’ option, which detects and resolves hash collisions. On systems with a very high data ingest rate of largely duplicate data, this may provide better overall performance than a secure hash without collision verification.


Unfortunately, because there are so many variables that affect performance, I cannot offer any absolute guidance on which is better. However, if you are willing to make the investment to experiment with different checksum/verify options on your data, the payoff may be substantial. Otherwise, just stick with the default provided by setting dedup=on; it’s cryptographically strong and it’s still pretty fast.

Scalability and performance


Most dedup solutions only work on a limited amount of data — a handful of terabytes — because they require their dedup tables to be resident in memory.


ZFS places no restrictions on your ability to dedup. You can dedup a petabyte if you’re so inclined. The performance of ZFS dedup will follow the obvious trajectory: it will be fastest when the DDTs (dedup tables) fit in memory, a little slower when they spill over into the L2ARC, and much slower when they have to be read from disk. The topic of dedup performance could easily fill many blog entries — and it will over time — but the point I want to emphasize here is that there are no limits in ZFS dedup. ZFS dedup scales to any capacity on any platform, even a laptop; it just goes faster as you give it more hardware.
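
If you want to gauge this on your own pool, zdb can print DDT statistics for a dedup-enabled pool, and can even simulate the dedup ratio you would get on a pool that does not have dedup turned on yet (pool name ‘tank’ assumed; output details vary by build):

# zdb -DD tank
# zdb -S tank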

Acknowledgements


Bill Moore and I developed the first dedup prototype in two very intense days in December 2008. Mark Maybee and Matt Ahrens helped us navigate the interactions of this mostly-SPA code change with the ARC and DMU. Our initial prototype was quite primitive: it didn’t support gang blocks, ditto blocks, out-of-space, and various other real-world conditions. However, it confirmed that the basic approach we’d been planning for several years was sound: namely, to use the 256-bit block checksums in ZFS as hash signatures for dedup.


Over the next several months Bill and I tag-teamed the work so that
at least one of us could make forward progress while the other dealt
with some random interrupt of the day.


As we approached the end game, Matt Ahrens and Adam Leventhal developed
several optimizations for the ZAP to minimize DDT space consumption both
on disk and in memory, key factors in dedup performance. George Wilson
stepped in to help with, well, just about everything, as he always does.


For final code review George and I flew to Colorado where many folks
generously lent their time and expertise: Mark Maybee, Neil Perrin,
Lori Alt, Eric Taylor, and Tim Haley.


Our test team, led by Robin Guo, pounded on the code and made a couple
of great finds — which were actually latent bugs exposed by some new,
tighter ASSERTs in the dedup code.


My family (Cathy, Andrew, David, and Galen) demonstrated enormous
patience as the project became all-consuming for the last few months.
On more than one occasion one of the kids has asked whether we can do
something and then immediately followed their own question with,
“Let me guess: after dedup is done.”


Well, kids, dedup is done. We’re going to have some fun now.

Neat brain stuff!

March 29th, 2010 by EvilT

Toyota Emergency Stop Procedure

March 22nd, 2010 by EvilT

http://www.toyota.com/recall/videos/stoppingprocedure.html

Funny thing… This procedure was taught in my high school driver’s education class… 30 years ago…

An excellent breakdown of Science vs. Snake Oil in Health Supplements…

March 6th, 2010 by EvilT

Tweaking the Mac Mini gamma for accurate color on HDTV when using a DVI-HDMI adapter

March 5th, 2010 by EvilT

As one reader noted in this article on xlr8yourmac.com, the muddy blacks you get when connecting a Mac Mini to an HDTV via a DVI-HDMI adapter are caused by the difference between PC and TV signal levels (0-255 and 16-235, respectively).

You could use the gamma controls on the Mac Mini to dial the black point up to 16 and the white point down to 235. As the author discovered in the Plex forum, the developer of the freeware Gamma Control has built a preference item that solves this problem more simply in his newer product, Black Light: http://michelf.com.nyud.net/projects/black-light/

Howto upgrade/replace your OpenSolaris ZFS CIFS/SMB server hard drive

March 4th, 2010 by EvilT

Here are the step-by-step instructions I built to rebuild with a fresh install of OpenSolaris on a new hard drive and bring back my ZFS arrays…

1. Export the zpool on the old machine
#zpool export -f <pool name>
2. Copy or move the current shell scripts from <user home> and the root dirs. Make sure you copy the current crontab; to be safe, copy the group and passwd files too.
3. Install OpenSolaris on the new HD
4. Install the SMB server components
SUNWsmbs, SUNWsmbskr
5. Edit pam.conf
#sudo gedit /etc/pam.conf
Add the following line to the end of pam.conf:
other password required pam_smb_passwd.so.1 nowarn
6. Check the status of the CIFS server
# svcs smb/server
7. Turn the CIFS server on
# svcadm enable -r smb/server
8. Join the workgroup
#smbadm join -w <workgroupname>
9. Add all the users to the system, reset all passwords.
10. Import the zpool
# zpool import <pool name>
11. Check the shares
# sharemgr show -vp
12. Reset the perms and you are good to go….
13. Download VirtualBox and install it on the main machine (if you copied all the VM info in the user home dir and you install as that user, VirtualBox should be the same when you are done).
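
As a rough sketch of step 12, here’s how I’d re-enable a share and reset the perms from the shell. The dataset and share names are hypothetical, and the chmod is the Solaris ACL form; tighten everyone@ down to specific user@ entries once the users from step 9 are back in place:

# zfs set sharesmb=name=media <pool name>/media
# /usr/bin/chmod -R A=everyone@:full_set:fd:allow /<pool name>/media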

Great howto for writing a research paper in one day or less…

January 31st, 2010 by EvilT

From the good people at Nerd Paradise

Quoted here only to archive the work done by Blake on Nerd Paradise at the link above…

How to Write a 20 Page Research Paper in Under a Day

Post by Blake

So you’ve procrastinated again. You told yourself you wouldn’t do this 2 months ago when your professor assigned you this. But you procrastinated anyway. Shame on you. It’s due in a few hours. What are you going to do?

Pick a Topic

  • The more “legally-oriented” your topic is, the better. You’ll see why.
  • It has to be something you feel strongly about. Strong as in it makes you want to open your window and yell and shake your fist about it at joggers passing by. That strong.
  • It also has to be something that you already know some stuff about.
  • It also needs to have some depth to it. It can’t be like “We should have free pizza in lecture every Friday”. That’s lame. Unless you’re really creative; then it could possibly work, if your professor has a sense of humor and you really can write 20 pages about something silly like that.

Make a list

  • …of every possible outcome that this issue could cause in
    • …the near future
    • …the far future
  • …of every person that this topic affects.
  • …of any instances where this topic has come in the news.
  • …what you would do about this topic if you had the chance/power/enough-sugar
  • …any little detail you can think of

The important thing about this is to think of ABSOLUTELY EVERYTHING, no matter how silly or far-fetched. It’ll make your professor go “hmm, didn’t think about that one”. You can even get your friends to help you with this one. The more the merrier. It’s best to do this on a computer, because…

Reorder everything

Put your most obvious argument first.

Then put weird, off-the-wall stuff, regardless of importance.

Put the strongest argument for your case next.

Now list the incidents that will help argue for your point. Don’t know of any incidents in the news to help argue your point? That’s ok. Make up some, except keep it really really generic. When it comes time to quote the source, remember this: There are over 6 billion people in the world. There are countless newspapers and other sources that document people doing…stuff. If you list incidents that are generic enough and your topic isn’t extremely weird, at least one person out there has done something notable/stupid/crazy enough to make it to the news. Also, people have sued each other over everything imaginable. Find a court case database. Your topic has SOMEHOW manifested itself in court at some point in history. I can almost guarantee it. Just make sure that the situations you come up with are physically possible.

Now, list everything that could be construed to be the answer to the question “if elected, what would you do about this issue?”

It’s best to keep all this in the form of an outline.

Spaces

Now add several lines of space under each bullet. Keep adding spaces until your text document has reached the goal size of your paper.

Now print it out.

Get the hell away from your computer

  • I’m serious.
  • No really, get away from the computer.
  • Go outside and sit under a tree. If you hate outside, or if it’s too cold for humans to survive, or if there’s a band of rabid dogs roaming your neighborhood, good. It’ll help you write faster.

The reason you should do this is that everyone magically becomes ADD when they are near a computer. You can check your AIM messages later.

Write

Write a fiery rant in each of the spaces you allotted. Get pumped. Just don’t begin every paragraph with “I swear upon my father’s grave…” Also try not to repeat yourself too much. Be very specific. Talk to your reader as though they’ve never heard of your subject before. Write at about the same size that your typed version will be. Don’t worry too much if you don’t fill in all the spaces. But if you feel strongly enough about your topic, then this really shouldn’t be a problem. If you’re like me and can’t think linearly, you can skip around as much as you want.

Go Back Inside

Type everything. You’ll also notice more things occur to you as you type. Go ahead and throw them in under the corresponding categories. Don’t jump around too much at this point though. Maintain focus and bash out that essay as fast as possible. Although you should do this as fast as possible, be a typo nazi. Those little things really make it evident you did this at the last minute.

Time for that whole “research” part

Believe it or not, nothing you said was original. Remember what I said earlier about 6 billion people? Apply now. Pick each topic/case/scenario/subpoint. Anything you had to say about those has already been said by some scholar or professor or newspaper. Google it up. It won’t take long. Take a few key words from your main argument of each section and see what you get. Paraphrase their main argument or quote a few lines. Add the proper citations. Do NOT plagiarize.

Formatting

  • Some word processors are capable of non-integer spacing. Try 2.1 or 2.2 spacing.
  • There’s also the Good ol’ Margin trick
  • Title page
  • Did your professor specify to use MLA citations? She/he didn’t? Good. APA citation guidelines are much more friendly with website sources. Check it out.

Print.

Turn in.

Good job. Have a cookie.

Decrypting those annoying OSX key sequences.

January 27th, 2010 by EvilT

For those of you who came from the PC and wondered what the heck the flower and the thing that looks kinda like a backslash were… ;)

One of the best bicycle information sites available.

January 23rd, 2010 by EvilT

Sheldon Brown-Bicycle Technical Information

FREE online Computer Science courses!

January 14th, 2010 by EvilT

From the good people at supergeekland

FREE online Computer Science courses!

Howto reset a VirtualBox VM

January 2nd, 2010 by EvilT

Really simple:

VBoxManage will give you a list of currently running VMs:

#VBoxManage list runningvms

Then use VBoxManage from the command line as described below:

#VBoxManage controlvm <vmname> pause|resume|reset|poweroff|savestate

Get the complete list of options with: #VBoxManage controlvm
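
For example, to hard-reset a hung guest (use the VM name exactly as it appears in the runningvms list, quoted if it contains spaces):

#VBoxManage controlvm "OpenSolaris NAS" reset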

A lightly lifted map of common computer ports and components. From the good folks at Geekologie

January 2nd, 2010 by EvilT

Geekologie


SWEET!!!! Para-virtual network drivers for Windows and Linux in VirtualBox!

January 1st, 2010 by EvilT

These drivers are stable and much faster than having VirtualBox emulate a network card. You need VirtualBox 3.1.x to use the new adapter.

The adapter works for Windows 2000, XP, and probably 2003…

The HowTo

Tip: How to setup Windows guest paravirtual network drivers | KVM – The Linux Kernel-Based Virtual Machine

Sourceforge site…

http://sourceforge.net/projects/kvm/files/

Icovia® Space Planner – Room planning site

November 16th, 2009 by EvilT

Excellent… Fast… If you want to plan where to put everything in your room, check out this site.

Icovia® Space Planner