I am not particularly happy about the power supplies Ci Design used in the NSR316 (at least the version in the chassis I have). They run quite hot (up to 165° F) even after the ventilation modification I performed on one of the chassis as an experiment. They also don't seem to be particularly reliable - after powering off 4 RAIDzilla IIs for a week while my floors were being refinished, 3 power supplies (out of a total of 8) failed to turn on when power was re-applied and had to be replaced. They are also only 88% efficient (which probably contributes to the heat problem), while newer designs are up to 96% efficient. I contacted Ci Design, explaining that I had a number of NSR316 chassis, to ask if an upgraded power supply was available, and got back a response asking me for the serial number of the system. I interpreted that as a Derp! response (since I had purchased a quantity of these chassis and there is no "system" serial number other than the ones I assigned when I built them). I wrote back and explained this, but did not receive a response after waiting more than 2 weeks, so I decided to look elsewhere.
The main reason I selected the Ci Design NSR316 for the RAIDzilla II is the 3 LEDs per drive bay, which convey detailed status information about each individual drive. This is accomplished via a special drive backplane and custom EPCT firmware on the 3Ware 9650 controller I was using. The 9650 is now considered a "legacy" controller by Avago (the current owner of the 3Ware brand), although it is still available. The 9650 is a SATA-only controller - if I wanted the RAIDzilla 2.5 to support SAS drives, I would need to change the controller. This could be either a 3Ware 9750 (also "legacy") or something more modern such as an LSI-based controller. If I switched to a non-3Ware controller I would need to replace the 4 drive backplanes in the NSR316. Even after doing that, the best I would achieve would be 2 operable LEDs per drive bay. Given the response from Ci Design regarding power supplies, I didn't even try asking about backplanes!
If you ask pretty much anyone in the industry what server chassis they would use (if not purchasing a complete system from a company like Dell or HP Enterprise) the answer would almost always be Supermicro. Supermicro offers a bewildering number of server chassis. Even when restricting your search to a 3RU chassis with 16 horizontal hot-swap 3.5" drive bays, you still get to choose between 19 currently-available products! Once you wade through the product descriptions, you'll discover that the major cause of the large number of part numbers is a choice of 4 different backplane styles and a half dozen or so different power supply capacities. I wanted a power supply in the 800- to 1000-Watt range (the NSR316 used 820W power supplies) and a direct-wire multi-lane backplane. [The other choices for backplane are one with 16 individual drive cables and two with different types of SAS expander.] That led me to the CSE-836BA-R920B, which has redundant 920W 94% efficiency power supplies. These power supplies are also noted for their low-noise operation (Supermicro rates them as "Super Quiet", and there is only one other SQ power supply in the Supermicro catalog). While noise is not an issue in my server room, the SQ rating indicated that they didn't need high-speed fans to stay cool, which was definitely a big selling point after my experience with the Ci Design power supplies.
While the 3Ware 9650 can be connected to the more modern Supermicro drive backplane, it requires a discontinued I2C-MUX-SM1 adapter to operate the fault / locate LEDs. As I wanted to upgrade the RAIDzilla II to support SAS drives as well as SATA, it made sense to purchase a new SAS disk controller which directly supported the Supermicro backplane instead. I chose the LSI SAS 9201-16i controller which has excellent driver support and which supports the Supermicro backplane fault / locate LEDs on the same cable that it uses for the drive data. This is a JBOD ("just a bunch of disks") controller, not a RAID controller, but I wasn't using the on-board RAID functionality on the 3Ware 9650, so this doesn't matter. FreeBSD recently added a utility, mpsutil, to display lots of useful information as well as perform tasks such as updating the firmware. This puts the 9201-16i controller (and related models) on an equal basis with the 3Ware 9000 family's tw_cli utility.
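For example, here are a couple of the read-only mpsutil queries I use to check on the controller and the drives attached to it (the firmware-related subcommands are covered in the mpsutil(8) manpage, so I won't try to reproduce them from memory here):

# Show the adapter model, firmware and BIOS versions for the first mps(4) controller
mpsutil show adapter

# List the drives the controller currently sees
mpsutil show devices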
It is the differences in the minor features that make it possible to say "this case is better than that case" if you are looking for a particular feature. Sometimes one case will be a clear winner over the other, but most of the time a single feature isn't important enough by itself for the user to select a chassis based solely on that feature.
Some of the SC836 cables look like they could reach to an adjacent chassis! In order to make the cabling more manageable, most of these will need to be shortened to a more useful length. My initial concern was getting the power cables (24-pin main ATX and the two 8-pin EPS12V) down to a more manageable length, as otherwise I would not be able to use the airflow-directing shroud, at least not without stuffing cables into various gaps to move them out of the way. I'd gone through this process with the Ci Design chassis and by this point I decided I'd customize every cable rather than dealing with excess cable.
This photograph shows an intermediate state. The overly-long 2nd EPS12V cable and its two 4-pin connectors (the 2 * 4-pin instead of 1 * 8-pin is presumably to support non-server motherboards that only use a single 4-pin auxiliary 12V connector) have been cut back to match the length of the 1st EPS12V cable and fitted with a normal 8-pin EPS12V shell. The main point of this picture is to show that the 24-pin ATX connector's cables have been cut back and new pins crimped onto each of the wires coming out of the power supply. I do not recommend you even think about trying this unless you have the tooling, pins, and connector bodies necessary to accomplish a major undertaking of this type. Also, if you think you might ever use a different motherboard, shortening the cables now means that you might need to purchase a replacement power distribution board in the future if you change the motherboard. Lastly, if you make a mistake with the pinning of the connector, you can let the "magic smoke" out of some very expensive components.
At this point the shortened 24-pin ATX cable wires have been inserted into a new 24-pin ATX connector shell. After carefully verifying that all signals look good (using a premium power supply tester) it is time to move on to some of the other cables. The PMBus cable has also been shortened as you will see in a subsequent section. Additionally, the front panel COM2 and USB cables have also been shortened.
One of the issues I'd run into when adding 10GbE to the first RAIDzilla II systems was that there is no connector for a front panel LED on the Intel X540-T1 10GbE card, only an LED integrated into the RJ45 jack. I contacted Intel about this and they confirmed that this was intentional. I'm not sure why that is - it seems to me to be an important feature to have and would not add more than a few cents to the cost of the card. I have two theories about why Intel did this. It could be one, the other, both, or neither:
I decided that I was comfortable with soldering a connector onto the Intel card in order to get the activity signal out to where I could actually use it. However, since this system is a Supermicro motherboard in a Supermicro chassis, the interconnection between them is a 16-pin cable (which is also way too long, as it turns out) instead of individual connectors as in the Ci Design chassis.
The first thing I did was to shorten the front panel cable (made somewhat more difficult because Supermicro slits the ribbon cable into 16 individual wires inside the protective sleeving, to make them fit through it more easily). After getting the 16 loose wires onto a new connector in the correct sequence and crimping that connector, the [now] excess cable was cut off past the new connector.
That got me a properly-sized front panel to motherboard cable, but didn't do anything to connect the 10GbE card to the front panel "LAN 1" LED. I decided to create a custom interposer cable. It is a 16-pin female to 16-pin male which serves as a very short (4" or so) extension cord between the motherboard's front panel connector and the actual front panel cable. Two of the 16 wires are much longer and will be routed through the chassis to where they plug into the connector on a modified 10GbE card. The interposer also corrects the orientation of the front panel cable on the motherboard - the stock cable actually exits toward the back of the system and then flips back over the connector and heads to the front panel.
This is the custom interposer cable. The long "tail" part of the cable is covered with protective mesh sleeving with heat-shrink tubing at each end. The cable ends in a pair of socket pins, also protected with heat-shrink tubing. These pins will plug onto a new connector on the X540 card, as shown in the following photo.
This picture shows the connector added to the back of the X540 board (on the right side of the board about 1/3 of the way up). This 2-pin header is soldered to the pads for the activity LED built into the RJ45 connector on the board. Therefore, this connector already has the signal modified by the current-limiting resistor on the X540 board. The SC836 control panel also has current-limiting resistors for the network activity LEDs, but the double resistance doesn't decrease the brightness of the LED appreciably.
This is a view of the completed system, looking straight down from the top. You can see that all of the cables are the exact length required after having been shortened or custom made, as needed. The processor and memory area is covered with a clear plastic shroud (air guide), so fans on the processor heat sinks are not entirely necessary. The processor fans provide a margin of safety in the event of a failure of one of the chassis fans. All 12 of the memory slots are now populated with 8GB DIMMs (in the previous RAIDzilla II, every alternate slot was empty). The processors under the heat sinks are now E5620s.
The expansion cards (in order from left to right) are:
This is the same system as the previous picture, but shown looking toward the left side of the chassis from over the power supply.
Here you can see the 16-port SAS-2 controller and the cables to the internal drives. As I mentioned above, one of the areas where I consider the Ci Design chassis to be better than the Supermicro one is the routing of the disk cables. With the Supermicro chassis it is very easy to end up with excess cable lengths stuffed somewhere that restricts airflow. I tried using various lengths of Supermicro SAS cables to work around this, but none of the stock lengths were a perfect fit. They were also rather bulky and tended to "spread out" when curved, taking up even more space in the chassis. I presume that most of Supermicro's customers use one of the expander backplanes, so only one or two cables would be required.
After some searching and getting quotes for custom-length SAS cables (with sideband), I found 3M's high routability cables. This is a type of cable that 3M developed for supercomputers and later adapted for use as SAS cabling. It is a flat, foldable cable that (optionally) incorporates SAS sidebands and is available pre-made in practically any length desired - sort of. While they have assigned part numbers for 175 different lengths (every 1 cm from 0.25 m to 2.00 m), there is only one stock length (0.5 m) available, and that isn't long enough for the 3rd and 4th backplane connectors. They'll gladly make any length you want, as long as you want a lot of them - depending on the distributor you ask, anywhere from 350 to 1000 pieces of the same length! I managed to track down the oddball lengths I required from a 3M customer who had them left over from a discontinued project.
As you can see, these cables are perfectly flat and there is no excess length anywhere. They do include the sideband pins, so the controller can communicate with the backplane to operate the locate / fault LED in each drive bay. I added the labels to show which ports they were connected to. At the far right of the picture you can see a piece of black Velcro holding the 4 cables together. This is to keep them from sliding around where they go through the slotted rubber air dams below the fans. 3M also cautions that the side edges of the cables are conductive due to the way they are manufactured, so this prevents them from touching the motherboard.
In this final picture of the interior, you can see the unobstructed airflow and the overall organization of the cabling. At the extreme right of the chassis, in front of the VeloDrive SSD, you can see 3 silver SATA cables plugged into 3 of the motherboard's SATA ports. Ports 0 and 1 are for the rear-mounted 2.5" drive bays (below the power supplies, as you will see in a subsequent picture), one of which contains the Samsung SSD boot drive. Port 2 is for the DVD-RW drive at the front of the chassis (out of view at the top of this picture).
Continuing the methodology I've used since the original RAIDzilla over 10 years ago, there are a series of labels on the top of the case. The front label lists the hardware in the system while the center (and rear, not shown) labels caution against running the system with the cover off. This picture shows the RAIDzilla 2.5 with 16 2TB drives migrated from a RAIDzilla II.
Moving to a case with a front cover and badge holder let me create a case badge which combines the FreeBSD Beastie mascot with the RAIDzilla name. Although I designed the RAIDzilla as a commercial-grade product, it is strictly a hobby project and thus the use of Beastie falls within the creator's usage guidelines. One thing which made getting the case badges more difficult than usual is that the bezel's badge recess is not square - it is rectangular. Finding a badge manufacturer that was willing to make a small production run of a full-color badge in a non-standard size, with a clear raised dome and non-rounded corners, was more difficult than I expected. I selected Techiant to manufacture the RAIDzilla badges and have been very pleased with the service they provided.
Here is a closeup of the RAIDzilla Beastie badge on the bezel.
One of the 16 8TB drives in the RAIDzilla 2.5. Lots and lots of space! You can tell right away that this is a serious drive - it has a basic black-and-white label that just provides all of the information an enterprise integrator might need. The only graphics or logos on the drive are the ones required by approval agencies or the various standards that the drive complies with. The drives are HGST HUH728080AL4200 units, which are 8TB SAS-3 drives with a 4K Native format (instead of emulated 512 byte sectors). I selected the ISE (Instant Secure Erase) version based on availability and pricing - versions with other security options are less common and thus more expensive.
rdiff-backup has no understanding of modern filesystems - it works on files and that's all. Since ZFS has built-in snapshot capability (which takes care of daily increments) and has a built-in method for sending snapshots, it made sense to look for something that could use those native ZFS features. After looking at a large number of "does it all" replication packages (there are a surprisingly large number of packages which don't quite do everything I wanted), I selected zrep from Philip Brown. It provides all of the features I was looking for:
While I was still searching for a replication solution, I found a utility named bbcp. It claimed to provide wire-speed performance on network transfers. The latest version was also available in the FreeBSD Ports Collection. That would be great, except that the version in there doesn't work. It has all sorts of bizarre problems. I don't know if the problems were introduced in the upstream bbcp code or if the problems came from the changes needed for the FreeBSD port. The bbcp manpage is pretty enigmatic (second only to the code). It probably makes sense to high-energy physicists (the intended users), but I decided to "punt" and just install the last known working version, 20120520. For your convenience, I have made a kit which contains the files needed to build the old version as a FreeBSD port, as well as a pre-compiled binary (for FreeBSD 10.3 amd64) here. If you're at all security-conscious you won't use that archive, but will instead download the older version from the FreeBSD Ports Repo and compile your own copy.
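If you do go the compile-it-yourself route, the build itself is just a normal ports build (this assumes the port still lives at net/bbcp; pinning it to the 20120520 version means checking out the correspondingly older revision of that port from the ports repository, which I won't walk through here):

# Build and install bbcp from the FreeBSD Ports Collection
cd /usr/ports/net/bbcp
make install clean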
With a working bbcp installed, after doing the usual magic to allow key-authenticated SSH sessions as root, I was able to perform a test replication from one 20TB pool to a second, empty pool. Performance was quite good - around 750Mbyte/sec. At this point, throughput is limited by disk performance, not network performance as you can see from this test of copying a data stream between two RAIDzilla 2.5 systems:
(0:1) srchost:~terry# bbcp -P 2 -s 8 /dev/zero desthost:/dev/null
bbcp: Creating /dev/null/zero
bbcp: 160620 06:06:45  0% done; 1.2 GB/s
bbcp: 160620 06:06:47  0% done; 1.2 GB/s
bbcp: 160620 06:06:49  0% done; 1.2 GB/s
bbcp: 160620 06:06:51  0% done; 1.2 GB/s
bbcp: 160620 06:06:53  0% done; 1.2 GB/s
^C
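For reference, the "usual magic" for key-authenticated root SSH amounts to roughly the following (a sketch only - tighten it to suit your own security policy; desthost is the destination 'zilla as in the examples here):

# On the source host: create a key for root (no passphrase, since cron will use it)
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""

# Install the public key in the destination host's root account
ssh desthost 'mkdir -p /root/.ssh; cat >> /root/.ssh/authorized_keys' < /root/.ssh/id_ed25519.pub

# On the destination host: allow key-only root logins in /etc/ssh/sshd_config...
#   PermitRootLogin prohibit-password
# ...and then restart sshd
service sshd restart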
While building RAIDzilla-like systems for a few friends, I discovered another misfeature of bbcp. Many of these friends don't have a 10GbE network at their location, so I've been building pairs of RAIDzilla-like systems which use regular GigE for their link to the rest of the location, but which have a dedicated point-to-point 10GbE link (normally using Intel X520-DA1 cards and a DAC cable). When trying to run bbcp over the point-to-point link, I would receive a bizarre stream of error messages:
bbcp: Invalid argument obtaining address for 10g.rz2
bbcp: Invalid argument obtaining address for 10g.rz2
bbcp: No route to host unable to find 10g.rz2
bbcp: Unable to allocate more than 0 of 4 data streams.
I took a quick look through the bbcp source code and discovered that it can also use IP addresses as well as host names. To get past this problem without needing to make yet more changes to bbcp, I modified the replication script shown below to use IP addresses on the systems where this problem appears.
The next thing to do was to have zrep use bbcp as its transport protocol (it normally uses SSH, which has lots of overhead due to single-threaded encryption processing). The zrep author was quite amenable to this suggestion, and I soon had a bbcp-aware version of zrep that I could test with:
(0:2) srchost:/sysprog/terry# zrep init storage/data desthost storage/data
Setting properties on storage/data
Warning: zfs recv lacking -o readonly
Creating readonly destination filesystem as separate step
Creating snapshot storage/data@zrep-desthost_000000
Sending initial replication stream to desthost:storage/data
bbcp: Creating zfs
bbcp: 190126 04:52:23 not done; 968.0 MB/s
bbcp: 190126 04:52:53 not done; 968.1 MB/s
bbcp: 190126 04:53:23 not done; 968.1 MB/s
bbcp: 190126 04:53:53 not done; 968.4 MB/s
bbcp: 190126 04:54:23 not done; 968.6 MB/s
bbcp: 190126 04:54:53 not done; 968.5 MB/s
bbcp: 190126 04:55:23 not done; 968.7 MB/s
bbcp: 190126 04:55:53 not done; 968.8 MB/s
...
bbcp: 190126 09:46:23 not done; 909.7 MB/s
bbcp: 190126 09:46:53 not done; 909.6 MB/s
bbcp: 190126 09:47:23 not done; 909.5 MB/s
bbcp: 190126 09:47:53 not done; 909.4 MB/s
bbcp: 190126 09:48:23 not done; 909.4 MB/s
bbcp: 190126 09:48:53 not done; 909.3 MB/s
bbcp: 190126 09:49:23 not done; 909.2 MB/s
bbcp: 190126 09:49:53 not done; 909.0 MB/s
Initialization copy of storage/data to desthost:storage/data complete

This is essentially the same performance as a "bare metal" zfs send. The slow-down you see as the transfer progresses is due to physical performance limits on the disk drives - they transfer faster at the start of the disk and slower at the end, due to varying numbers of sectors per track. Adding zrep provides all of the additional features I listed above without slowing things down.
The above image, captured from my MRTG monitoring system, shows the network utilization on the switch port connected to the destination RAIDzilla. 17TB in 7½ hours - I like it!
Here are some useful commands to configure and manage replication.
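The zrep sub-commands I find myself reaching for most often look like this (a representative sample rather than an exhaustive list; see the zrep documentation for the full set):

zrep init storage/data desthost storage/data   # one-time setup of a replication pair
zrep sync storage/data                         # incremental sync of one filesystem
zrep sync all                                  # sync every zrep-managed filesystem
zrep status                                    # show when each filesystem was last synced
zrep list                                      # list zrep-managed filesystems and their properties
zrep expire storage/data                       # prune old zrep snapshots per the retention setting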
This is the script I use to perform daily replication. It is run automatically via a cron job:
#!/bin/sh
#
# Maintain the mirror of this server via replication
#
# 21-Jun-2016 - tmk - Convert rdiff-backup based do-mirror to use zrep
#
# Initialize necessary variables
#
BBCP="bbcp -s 8 -P 30"
export BBCP
SSH="ssh -q"
export SSH
#
# We could just do a "zrep sync all", but we do them one at a time in
# order to display the snapshots after each replication task.
#
# storage/data to desthost
#
ZREPTAG="zrep-desthost"
export ZREPTAG
#zrep init storage/data desthost storage/data (only use once when creating)
zrep sync storage/data
#
echo ""
echo "List of active snapshots on desthost:"
echo ""
$SSH desthost zfs list -r -t all -o name,creation,used,refer,written storage/data
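The cron entry that drives it is nothing special - something like the following, assuming the script is saved as /root/bin/do-mirror (the path is illustrative; the 1:00 AM start matches the snapshot creation times shown further down):

# root's crontab: run the nightly replication at 1:00 AM
0 1 * * * /root/bin/do-mirror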
Since I had multiple RAIDzilla systems with the same data on them, I timed the mirroring of the same amount of data from one 'zilla to a second using rdiff-backup and compared it with using zrep to replicate the same amount of changed data from a third 'zilla to a fourth:
rdiff-backup sync:
--------------[ Session statistics ]--------------
StartTime 1466417674.00 (Mon Jun 20 06:14:34 2016)
EndTime 1466425374.67 (Mon Jun 20 08:22:54 2016)
ElapsedTime 7700.67 (2 hours 8 minutes 20.67 seconds)
SourceFiles 537363
SourceFileSize 19182758812100 (17.4 TB)
MirrorFiles 537354
MirrorFileSize 19178697841009 (17.4 TB)
NewFiles 14
NewFileSize 4207614724 (3.92 GB)
DeletedFiles 5
DeletedFileSize 177445553 (169 MB)
ChangedFiles 43
ChangedSourceSize 58980532620 (54.9 GB)
ChangedMirrorSize 58949730700 (54.9 GB)
IncrementFiles 63
IncrementFileSize 5193403474 (4.84 GB)
TotalDestinationSizeChange 9254374565 (8.62 GB)
Errors 0
--------------------------------------------------
zrep sync:
(0:331) srchost:/storage/data# time zrep sync all
sending storage/data@zrep_000001 to desthost:storage/data
bbcp: Creating zfs
bbcp: 160620 08:30:48 not done; 671.0 MB/s
bbcp: 160620 08:31:18 not done; 726.1 MB/s
bbcp: 160620 08:31:48 not done; 734.5 MB/s
Expiring zrep snaps on storage/data
Also running expire on desthost:storage/data now...
4.614u 110.127s 1:36.49 118.9% 240+1039k 417+0io 0pf+0w
The transfer that took rdiff-backup over two hours completed in one minute and 37 seconds with zrep. Quite an improvement!
Here is the nightly replication output after things have been running for over a month:
sending storage/data@zrep-desthost_000025 to desthost:storage/data
bbcp: Creating zfs
bbcp: 160726 01:00:30 not done; 456.8 MB/s
bbcp: 160726 01:01:00 not done; 481.3 MB/s
bbcp: 160726 01:01:30 not done; 480.7 MB/s
Expiring zrep snaps on storage/data
Also running expire on desthost:storage/data now...
Expiring zrep snaps on storage/data

List of active snapshots on desthost:

NAME                                CREATION               USED   REFER  WRITTEN
storage/data                        Tue Jun 21  3:31 2016  17.1T  16.9T  0
storage/data@zrep-desthost_000007   Sun Jun 26  1:00 2016  8.29G  16.8T  16.8T
storage/data@zrep-desthost_000008   Mon Jun 27  1:00 2016  79.9M  16.8T  8.31G
storage/data@zrep-desthost_000009   Tue Jun 28  1:00 2016  80.0M  16.8T  20.0G
storage/data@zrep-desthost_00000a   Wed Jun 29  1:00 2016  599M   16.8T  27.6G
storage/data@zrep-desthost_00000b   Thu Jun 30  1:00 2016  140M   16.8T  6.91G
storage/data@zrep-desthost_00000c   Fri Jul  1  1:00 2016  1.91G  16.8T  33.8G
storage/data@zrep-desthost_00000d   Sat Jul  2  1:00 2016  89.0M  16.8T  20.2G
storage/data@zrep-desthost_00000e   Sun Jul  3  1:00 2016  87.5M  16.8T  3.13G
storage/data@zrep-desthost_00000f   Mon Jul  4  1:00 2016  87.2M  16.8T  8.31G
storage/data@zrep-desthost_000010   Tue Jul  5  1:00 2016  87.2M  16.8T  19.5G
storage/data@zrep-desthost_000011   Wed Jul  6  1:00 2016  87.2M  16.8T  9.75G
storage/data@zrep-desthost_000012   Thu Jul  7  1:00 2016  80.1M  16.8T  1.42G
storage/data@zrep-desthost_000013   Fri Jul  8  1:00 2016  80.0M  16.8T  12.6G
storage/data@zrep-desthost_000014   Sat Jul  9  1:00 2016  80.3M  16.8T  8.94G
storage/data@zrep-desthost_000015   Sun Jul 10  1:00 2016  80.3M  16.8T  130M
storage/data@zrep-desthost_000016   Mon Jul 11  1:00 2016  80.5M  16.8T  8.51G
storage/data@zrep-desthost_000017   Tue Jul 12  1:00 2016  80.5M  16.8T  19.5G
storage/data@zrep-desthost_000018   Wed Jul 13  1:00 2016  80.4M  16.8T  7.98G
storage/data@zrep-desthost_000019   Thu Jul 14  1:00 2016  87.3M  16.8T  1.47G
storage/data@zrep-desthost_00001a   Fri Jul 15  1:00 2016  86.4M  16.8T  12.6G
storage/data@zrep-desthost_00001b   Sat Jul 16  1:00 2016  86.5M  16.8T  15.4G
storage/data@zrep-desthost_00001c   Sun Jul 17  1:00 2016  87.6M  16.8T  264M
storage/data@zrep-desthost_00001d   Mon Jul 18  1:00 2016  87.1M  16.9T  32.4G
storage/data@zrep-desthost_00001e   Tue Jul 19  1:00 2016  85.6M  16.9T  19.8G
storage/data@zrep-desthost_00001f   Wed Jul 20  1:00 2016  83.4M  16.9T  8.04G
storage/data@zrep-desthost_000020   Thu Jul 21  1:00 2016  81.0M  16.9T  1.90G
storage/data@zrep-desthost_000021   Fri Jul 22  1:00 2016  80.5M  16.9T  12.6G
storage/data@zrep-desthost_000022   Sat Jul 23  1:00 2016  79.9M  16.9T  7.77G
storage/data@zrep-desthost_000023   Sun Jul 24  1:00 2016  79.9M  16.9T  80.0M
storage/data@zrep-desthost_000024   Mon Jul 25  1:00 2016  79.9M  16.9T  8.33G
storage/data@zrep-desthost_000025   Tue Jul 26  1:00 2016  0      16.9T  23.4G
As I mentioned earlier, I keep a month's worth of snapshots on the destination system. The nightly report shows the name of each snapshot as well as the amount of data consumed by each snapshot. As snapshots expire, the space they use (for files no longer on the system) is reclaimed.
My first idea was to simply use zfs send to copy the pool to tape. This would require minor modifications to a utility like dd in order to handle automatic tape changes. However, this had a number of serious drawbacks:
Because of this, I decided to continue using GNU tar, even though it is rather slow. This gave me the following benefits:
I expanded the backup script I had been using previously to automatically load the next tape in the library, list the volume labels of each tape in the backup set, and record the time when each tape was loaded (so I can see how long it takes to write each tape). This is the script I am using:
#!/bin/sh
#
# Backup the storage pool to the tape library
#
# 22-Jun-2016 - tmk - Initial version
# 23-Jun-2016 - tmk - Allow various permutations of start / end processing
# 03-Jul-2016 - tmk - Deal with both (ch0,passN) and (passN,ch0) in devlist
# 17-Jul-2016 - tmk - Document "tape record bigger than supplied buffer" msg
# 07-Jan-2019 - tmk - Un-break multi-word STARTTAPE / ENDTAPE values, fix
#                     position-dependent gtar change of --exclude
#
# Backup a ZFS pool to a tape library. Block size of 512 produces a 256KB
# record. This is chosen as a tradeoff between drive performance (which
# increases up to the limit of 16MB) and wasted tape (since files less than
# 256KB will still consume 256KB on the tape).
#
# Configuration options
#
# STARTTAPE can be "first", "next", "load N" or ""
# ENDTAPE can be "unload", "next" or ""
# Some useful start / end combinations are:
#   "first" / "unload" - Always start with first tape, unload when done
#   "load N" / "unload" - Start with specific tape, unload when done
#   "next" / "" - Load next tape at start, leave last tape in drive
#   "" / "next" - Use tape in drive at start, load next when done
# If you change these values between backups, make sure you have the correct
# tape in the drive, or that the drive is empty, as required.
STARTTAPE="first"
ENDTAPE="unload"
FILESYS="/storage/data"
EXCLUDES="/storage/data/unixbackups"
#
# IMPORTANT NOTES:
#
# 1) DO NOT USE the /dev/sa0.ctl device as suggested by the sa(4) manpage -
#    it has crashed the system when used here (under FreeBSD 8.x, possibly
#    fixed since then).
#
# 2) When restoring, you need to use --record-size=256K to avoid the dreaded
#    "tape record bigger than supplied buffer" error. For regular (non-GNU)
#    tar, the equivalent is --block-size=512.
#
# 3) The record size in this script (256KB, --blocking-factor=512) presumes
#    that a custom kernel with "options MAXPHYS=(256*1024)" is in use. If not,
#    you'll get errors like these:
#    nsa0: request size=262144 > si_iosize_max=131072; cannot split request
#    nsa0: request size=262144 > MAXPHYS=131072; cannot split request
#
# 4) Do *NOT* interrupt / background / kill the mtx process, even if it seems
#    to be hung. Since it sends raw CCBs to the library, you will probably
#    need to reboot both the FreeBSD system and the library in order to get
#    things working again. You have been warned!
#
# 5) You probably want to run this via the IPMI remote console (if available)
#    as a problem with your network connection can cause this script to be
#    HUP'd if you get disconnected. That will probably ruin your whole week.
#
# 6) The FreeBSD tape driver does a "taste test" when a tape is loaded. This
#    produces a bogus "tape record bigger than supplied buffer" kernel mes-
#    sage for every tape in a backup set. This was discussed on the mailing
#    list: http://tinyurl.com/tapebigbuf If this offends you, you can use
#    the SA_QUIRK_NODREAD quirk to suppress it.
#
# Find the device name of the media changer device
#
findpass() {
	PASS=`camcontrol devlist | grep '[(,]ch0'`
	if [ "$PASS" = "" ]; then
		exit 1
	fi
	DEV=`echo "${PASS}" | \
		sed -e 's/^.*(//' -e 's/).*//' -e 's/,//' | \
		sed -re 's/ch[0-9]+//'`
	echo "/dev/${DEV}"
}
export CHANGER=`findpass`
if [ "$CHANGER" = "" ]; then
	echo "Unable to find tape changer device - no library attached?" >& 2
	exit 1
fi
#
echo "CHANGER is '$CHANGER'"
#
# Peek in the library and see what we have...
#
mtx status
date
#
# Perform start-of-backup library handling
#
if [ "$STARTTAPE" ]; then
	mtx $STARTTAPE
fi
#
# Back everything up, hoping we don't hit a write-protected tape or run
# out of tapes.
#
gtar --create --multi-volume --verbose --blocking-factor=512 --file=/dev/nsa0 \
	--exclude=$EXCLUDES $FILESYS \
	--new-volume-script "date; mtx next"
#
# Perform end-of-backup library handling
#
date
if [ "$ENDTAPE" ]; then
	mtx $ENDTAPE
fi
#
# And say we're done (hopefully successfully)
#
echo "All done!"
exit

The script should be pretty self-explanatory. The code to locate the changer device is specific to FreeBSD, but you could just hard-code the changer device name.
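For completeness, a restore from one of these backup sets looks roughly like this (a sketch only - it assumes CHANGER is exported as in the backup script, that slot 1 holds the first tape of the set, and it uses the --record-size value called out in the script comments):

# Load the first tape of the backup set (slot 1 in this example)
mtx load 1

# Extract with 256KB records, letting mtx chain to the next tape at each volume change.
# Member names have no leading slash (gtar stripped it at backup time).
gtar --extract --multi-volume --record-size=256K --file=/dev/nsa0 \
	--new-volume-script "mtx next" storage/data/path/to/restore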
The following information is specific to FreeBSD. While other operating systems have similar restrictions, the methods for discovering and dealing with them are likely quite different.
The IBM LTO6 drive I am using supports record sizes up to 8MB. An earlier version of this script for FreeBSD 8.4 used a record size of 4MB - or at least I thought it did. It turns out that FreeBSD was splitting the I/O request up into multiple chunks "behind my back". This behavior was changed in FreeBSD 10 to return an error by default. While there are tunables to restore the previous behavior, they are marked as temporary and are not available in FreeBSD 11 (released in October 2016).
Since I wasn't getting any benefit from the larger record size due to the transfer being split by the driver, I decided to use the largest block size the driver supported. For hardware handled by the mps driver, this is 256KB (defined by cpi->maxio in /sys/dev/mps/mps_sas.c). I had some discussions with the driver maintainer and the result was revision r303089 which calculated the correct maximum transfer size supported by the controller instead of using a fixed 256KB limit. For the controllers I am using, this increases the maximum supported block size to 4.5MB.
Unfortunately, that by itself was not sufficient as the kernel has a hard-coded limit in the MAXPHYS parameter. I needed to compile a custom kernel in order to change this limit:
#
# RAIDZILLA -- Kernel configuration file for RAIDzillas
#
# NOTE: We could "nodevice" all the stuff we don't use, but in this day
#       and age, that's kind of silly if the only goal is to save space
#       in the kernel. The only reason we have a custom kernel is to
#       override some sub-optimal defaults in the GENERIC kernel.
#
include		GENERIC
ident		RAIDZILLA
#
# Increase maximum size of Raw I/O (for tape drives). Older kernel ver-
# sions are limited to 256KB, since larger values will be constrained by
# si_iosize_max. This restriction was lifted for mpr/mps in r303089.
#
options		MAXPHYS=(1024*1024)

Here is the output of the tape backup script (I've removed the list of filenames on each tape):
(0:1) rz1:/sysprog/terry# ./tape-backup Storage Changer /dev/pass5:1 Drives, 47 Slots ( 3 Import/Export ) Data Transfer Element 0:Empty Storage Element 1:Full :VolumeTag=TMK500L6 Storage Element 2:Full :VolumeTag=TMK501L6 Storage Element 3:Full :VolumeTag=TMK502L6 Storage Element 4:Full :VolumeTag=TMK503L6 Storage Element 5:Full :VolumeTag=TMK504L6 Storage Element 6:Full :VolumeTag=TMK505L6 Storage Element 7:Full :VolumeTag=TMK506L6 Storage Element 8:Full :VolumeTag=TMK507L6 Storage Element 9:Full :VolumeTag=TMK508L6 Storage Element 10:Full :VolumeTag=TMK509L6 Storage Element 11:Full :VolumeTag=TMK510L6 Storage Element 12:Full :VolumeTag=TMK511L6 Storage Element 13:Full :VolumeTag=TMK512L6 Storage Element 14:Full :VolumeTag=TMK513L6 Storage Element 15:Full :VolumeTag=TMK514L6 Storage Element 16:Full :VolumeTag=TMK515L6 Storage Element 17:Full :VolumeTag=TMK516L6 Storage Element 18:Full :VolumeTag=TMK517L6 Storage Element 19:Full :VolumeTag=TMK518L6 Storage Element 20:Full :VolumeTag=TMK519L6 Storage Element 21:Full :VolumeTag=TMK520L6 Storage Element 22:Full :VolumeTag=TMK521L6 Storage Element 23:Full :VolumeTag=TMK522L6 Storage Element 24:Full :VolumeTag=TMK523L6 Storage Element 25:Full :VolumeTag=TMK524L6 Storage Element 26:Full :VolumeTag=TMK525L6 Storage Element 27:Full :VolumeTag=TMK526L6 Storage Element 28:Full :VolumeTag=TMK527L6 Storage Element 29:Full :VolumeTag=TMK528L6 Storage Element 30:Full :VolumeTag=TMK529L6 Storage Element 31:Full :VolumeTag=TMK530L6 Storage Element 32:Full :VolumeTag=TMK531L6 Storage Element 33:Full :VolumeTag=TMK532L6 Storage Element 34:Full :VolumeTag=TMK533L6 Storage Element 35:Full :VolumeTag=TMK534L6 Storage Element 36:Full :VolumeTag=TMK535L6 Storage Element 37:Full :VolumeTag=TMK536L6 Storage Element 38:Full :VolumeTag=TMK537L6 Storage Element 39:Full :VolumeTag=TMK538L6 Storage Element 40:Full :VolumeTag=TMK539L6 Storage Element 41:Full :VolumeTag=TMK540L6 Storage Element 42:Full :VolumeTag=TMK541L6 Storage Element 43:Full :VolumeTag=TMK542L6 Storage Element 44:Full :VolumeTag=TMK543L6 Storage Element 45 IMPORT/EXPORT:Empty Storage Element 46 IMPORT/EXPORT:Empty Storage Element 47 IMPORT/EXPORT:Empty Mon Jul 25 23:00:07 EDT 2016 Loading media from Storage Element 1 into drive 0...done gtar: Removing leading `/' from member names Tue Jul 26 07:24:05 EDT 2016 Unloading drive 0 into Storage Element 1...done Loading media from Storage Element 2 into drive 0...done Tue Jul 26 15:24:24 EDT 2016 Unloading drive 0 into Storage Element 2...done Loading media from Storage Element 3 into drive 0...done Tue Jul 26 21:04:37 EDT 2016 Unloading drive 0 into Storage Element 3...done Loading media from Storage Element 4 into drive 0...done Wed Jul 27 03:23:42 EDT 2016 Unloading drive 0 into Storage Element 4...done Loading media from Storage Element 5 into drive 0...done Wed Jul 27 09:30:57 EDT 2016 Unloading drive 0 into Storage Element 5...done Loading media from Storage Element 6 into drive 0...done Wed Jul 27 15:19:07 EDT 2016 Unloading drive 0 into Storage Element 6...done Loading media from Storage Element 7 into drive 0...done Wed Jul 27 21:26:58 EDT 2016 Unloading drive 0 into Storage Element 7...done Loading media from Storage Element 8 into drive 0...done Wed Jul 27 23:10:21 EDT 2016 Unloading drive 0 into Storage Element 8...done All done!
The LTO6 drive takes about 3 times as long to fill a tape as the older LTO4 drive, but it also holds 3 times the data. I'm not getting the absolute peak performance out of the drive - the IBM LTO6 HH drive specs say that the drive's performance with incompressible data (which describes most of my data) is 576GB/hour. That means that filling a tape at the rated speed should take about 4 hours and 20 minutes. The above list of load times shows that it takes about 6 to 8 hours for the backup script to fill an entire LTO6 tape. The variation might be due to other usage on the RAIDzilla at the same time, or due to varying file sizes in different directories. This is something that could be investigated in the future.
Here are two cases of the older LTO4 backup tapes, ready to be moved to secure offsite storage in a climate-controlled vault in another state. Note the tamper seals on the left sides of the cases. Also in the first case is a CD-ROM with the backup directory listing of the tapes, so I can locate which tape a file is on instead of needing to read all of the tapes.
Here is a new 128TB (85TB usable) RAIDzilla 2.5, mounted above an older 32TB (21TB usable) RAIDzilla II. Note the gold-colored horizontal support bar I mentioned earlier. The rack ears were subsequently modified to add additional mounting slots and the support bar was then removed.
This is the RAIDzilla 2.5 from the previous picture with its bezel removed to show you the 16 * 8TB drives. While it appears that all of the front panel LEDs are illuminated, that is a reflection from the camera flash. Only the power and network activity LEDs are actually on. The drive carriers have their blue LEDs lit (drive present) and their red LEDs (fault) off.
Here you see a picture of the rear of the same installed RAIDzilla 2.5. On the left side, below the power supplies, you can see the two 2.5" hot-swap drive bays. The lower tray holds the Samsung SSD with the operating system installed on it. The upper tray is empty, other than having a dummy blank in it for airflow purposes. The green cable is a serial console connection. To the left of the green cable is the remote access Ethernet (top) and USB keyboard / mouse (bottom). The blue connector is the VGA video connector. It and the USB keyboard / mouse connect to a KVM switch that serves all of the systems in the racks. The leftmost expansion slot has a SAS cable connecting the RAIDzilla to the IBM TS3200 automated tape library. The cable in the middle expansion slot is the 10GbE connection to the rest of the equipment in the racks, via a 24-port 10GbE switch.
Each build column below shows price (each) / price (total), with note references in brackets; "-" means that part was not purchased for that build or upgrade.

Part Number | Manufacturer | Qty. | RAIDzilla II (Feb 2013) | RAIDzilla 2.5 (Jan 2016) | RAIDzilla II -> 2.5 Upgrade (Jan 2016) | RAIDzilla 2.5 -> 2.75 Upgrade (Jan 2019)
NSR 316 | Ci Design | 1 | $920 / $920 [1] | - | - | -
CSE-836BA-R920B | Supermicro | 1 | - | $956 / $956 | $956 / $956 | -
MCP-220-83605-0N (2.5 bay) | Supermicro | 1 | - | $55 / $55 | $55 / $55 | -
MCP-220-81502-0N (DVD kit) | Supermicro | 1 | - | $17 / $17 | $17 / $17 | -
MCP-210-83601-0B (bezel) | Supermicro | 1 | - | $15 / $15 [2] | $15 / $15 [2] | -
X8DTH-iF | Supermicro | 1 | $245 / $245 [2] | $150 / $150 [3] | - | -
E5520 | Intel | 2 | $75 / $150 [3] | - | - | -
E5620 | Intel | 2 | - | $10 / $20 [3] | $10 / $20 [3] | -
X5680 | Intel | 2 | - | - | - | $34 / $68 [3]
STS100C | Intel | 2 | $32 / $64 | $35 / $70 [2] | - | -
HMT31GR7AFR4C-H9 | Hynix | 6/12 | $68 / $408 [3,4] | $25 / $300 [3,4] | $25 / $150 [3,4] | -
OCZSSDPX-ZD2P84256G | OCZ Technology | 1 | $1200 / $1200 [5] | - | [5] | -
VD-HHPX8-300G | OCZ Technology | 1 | - | $250 / $250 [3] | - | -
SSDPED1D280GASX | Intel | 1 | - | - | - | $352 / $352
9650SE-16ML | 3Ware | 1 | $780 / $780 | - | - | -
BBU-MODULE-04 | 3Ware | 1 | $124 / $124 [2] | - | - | -
CBL-SFF8087-05M | 3Ware | 4 | $10 / $40 [2] | - | - | -
9201-16i | LSI / Avago | 1 | - | $330 / $330 | $330 / $330 | -
8F36 cable (various lengths) | 3M | 4 | - | $15 / $60 | $15 / $60 | -
DL-8A4S DVD | LITEON | 1 | $50 / $50 [1] | $40 / $40 [1] | - | -
SAS 5/E HBA | Dell | 1 | $50 / $50 [2] | - | - | -
6Gbps SAS HBA | Dell | 1 | - | $85 / $85 [2] | $85 / $85 [2] | -
X540-T1 (10GbE) | Intel | 1 | - | $300 / $300 | $300 / $300 | -
WD3200BEKT (boot drives) | Western Digital | 2 | $60 / $120 | - | - | -
MZ-7KE256BW (850 Pro SSD) | Samsung | 1 | - | $117 / $117 | $117 / $117 | -
Miscellaneous | Cables / labels / etc. | 1 | $50 / $50 | $50 / $50 | $50 / $50 | -
Total Cost | | | $4201 | $2815 | $2155 | $420
As you can see from the table above, upgrading a RAIDzilla II to a 2.5 costs almost the same as building a RAIDzilla 2.5 from scratch (excluding the 16 data storage drives). If the OCZ SSD and the memory are re-used from an old RAIDzilla II, building a new 2.5 costs only $265 more than upgrading a RAIDzilla II. This is the total of the current prices for the X8DTH-iF motherboard, the two STS100C heatsinks and the DVD drive. In fact, the first RAIDzilla 2.5 was built from scratch, using spare parts for the OCZ SSD and 48GB of memory, so I would have a proof-of-concept system to run extended tests on before taking any of my production RAIDzilla II's out of service for an upgrade.
As a quick test of the new hardware, I did some real-world copies between the RAIDzilla 2.5 and a Windows 7 client machine (a Dell Optiplex 9020 with a Samsung 850 EVO SATA SSD and an Intel X540-T1 10GbE network card). The files were served via Samba. Bear in mind that this is without any performance tuning whatsoever.
As you can see, this real-world copy operation achieved over 600MB/second, even while the RAIDzilla was serving requests from other clients. With this result, I didn't even bother with any performance tuning!
Since one of the main reasons I did this upgrade was to get cooler and more efficient power supplies, the first thing I did was put the system under heavy load and check the performance of the power supplies. I started a ZFS scrub operation (to perform heavy I/O to all of the disk drives):
(0:1) hostname:/sysprog/terry# zpool status
  pool: data
 state: ONLINE
  scan: scrub in progress since Sun Feb 7 06:24:58 2016
        6.51T scanned out of 20.0T at 1.23G/s, 3h21m to go
        0 repaired, 32.50% done
1.23 gigabytes/second. Not bad at all...
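For the record, the scrub itself is started with the standard ZFS command (the pool is named data, as the status output above shows):

zpool scrub data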
The following temperature, fan, and power measurements were performed using 16 * 2TB drives in both the RAIDzilla II and 2.5 systems for an "apples to apples" comparison. As the disk drives consume a relatively large portion of the power, measurements comparing a system with 16 * 8TB drives to one with 16 * 2TB drives would not be meaningful.
I then spot-checked some chassis temperatures with a Fluke 62 MAX+ digital thermometer, held at a fixed distance from each chassis and obtained the following readings:
System | Idle Upper PS Temp | Idle Lower PS Temp | Idle Exhaust Temp | Active Upper PS Temp | Active Lower PS Temp | Active Exhaust Temp |
RAIDzilla II | 98.1°F | 90.8°F | 86.9°F | TBD°F | TBD°F | TBD°F |
RAIDzilla 2.5 | 87.6°F | 88.1°F | 77.0°F | TBD°F | TBD°F | TBD°F |
The Supermicro power supplies also report a complete set of sensors to the monitoring software. [These temperatures are from the same system being measured in the above table, but were recorded at a different time.]
[SlaveAddress = 78h] [Module 1]
Item                 | Value
----                 | -----
Status               | [STATUS OK](00h)
Input Voltage        | 121.5 V
Input Current        | 1.78 A
Main Output Voltage  | 12.01 V
Main Output Current  | 14.25 A
Temperature 1        | 36C/97F
Temperature 2        | 45C/113F
Fan 1                | 2272 RPM
Fan 2                | 3296 RPM
Main Output Power    | 171 W
Input Power          | 213 W
PMBus Revision       | 0x8B22
PWS Serial Number    | P9212CF26ATxxxx
PWS Module Number    | PWS-920P-SQ
PWS Revision         | REV1.1

[SlaveAddress = 7Ah] [Module 2]
Item                 | Value
----                 | -----
Status               | [STATUS OK](00h)
Input Voltage        | 125.0 V
Input Current        | 1.73 A
Main Output Voltage  | 12.05 V
Main Output Current  | 15.37 A
Temperature 1        | 35C/95F
Temperature 2        | 43C/109F
Fan 1                | 2400 RPM
Fan 2                | 3424 RPM
Main Output Power    | 185 W
Input Power          | 208 W
PMBus Revision       | 0x8D22
PWS Serial Number    | P9212CF26ATxxxx
PWS Module Number    | PWS-920P-SQ
PWS Revision         | REV1.1
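This dump is what the SMCIPMITool utility (the same tool mentioned below for the power graphs) reports for the power supplies; if memory serves, the invocation is along these lines, with the BMC address and credentials as placeholders - check the tool's built-in help for the exact command set:

# Query the PMBus power supply sensors through the BMC (address/credentials are placeholders)
SMCIPMITool 192.168.1.20 ADMIN ADMIN pminfo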
This compares the temperature sensors (7 on the RAIDzilla II, 13 on the RAIDzilla 2.5) of two RAIDzillas installed side-by-side in a pair of racks. The upper graph is the RAIDzilla II, the lower graph is the RAIDzilla 2.5. Despite having twice the memory and faster CPUs, the RAIDzilla 2.5's measurements are 7°F cooler than the RAIDzilla II's. The improvement in cooling efficiency is even greater than this graph shows, due to the lower fan speeds as explained below.
Due to the improved efficiency of the power supplies in the RAIDzilla 2.5, I changed the fan speed setting from "balanced" (a tradeoff between cooling and power usage) to "energy saving" (lowest possible speed). With either setting, the fan speed is controlled by various thermal sensors on the motherboard. In energy saving mode, the speed is reduced to the lowest speed that still provides adequate cooling, and the system will change the fan speed frequently in order to maintain the target temperature.
In some situations the frequent fan speed changes can be annoying to the user as it is very obvious when the fan speeds change. However, the RAIDzillas are installed in racks in a dedicated server room, so that is not a factor here. Most users of this chassis would probably install it somewhere they cannot hear it, as it is somewhat noisy regardless of the fan speed.
On the RAIDzilla II, the various fan groups run at different speeds - 6500 RPM for the rear exhaust fans, 4900-5100 RPM for the drive bay fans, and 4200-4300 RPM for the CPU fans. On the RAIDzilla 2.5, the rear exhaust and drive bay fans rotate at 3100-4600 RPM and the CPU fans at 1900-4000 RPM. The largest change from the RAIDzilla II is the CPU fan speed. That is due to the cooling shroud which directs air over the CPU and memory modules. The highest fan speed in the RAIDzilla 2.5 is 1900 RPM lower than in the RAIDzilla II.
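These fan speeds are reported by the motherboard's IPMI sensors; an easy way to spot-check them from another machine (not necessarily how my MRTG collector gathers them, but handy) is plain ipmitool - the BMC address and credentials below are placeholders:

# Read all BMC sensors over the network and show just the fan tachometers
ipmitool -I lanplus -H 192.168.1.20 -U ADMIN -P ADMIN sensor | grep -i fan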
Even with these greatly-reduced fan speeds, the temperature measurements in the RAIDzilla 2.5 are substantially lower than the ones in the RAIDzilla II, as shown in the earlier graphs.
These graphs show the power supply temperatures in the RAIDzilla II and RAIDzilla 2.5. This data is collected by polling the PMBus and its accuracy is not guaranteed. Variations in components in the power supplies may affect the readings - for example, the input voltage for the two power supplies in the RAIDzilla 2.5 shows a 3.5V difference, despite both power supplies being plugged into the same UPS.
However, there is a dramatic difference between the temperature of the Ci Design power supplies (upper graph) and the Supermicro ones (lower graph). Note that the Ci Design supplies report the exhaust temperatures as Temp 1 and the inlet temperatures as Temp 2, while the Supermicro does the opposite.
These graphs compare the power consumption in Watts between the Ci Design chassis (top graph) and the Supermicro chassis (bottom graph). As you can see, the Ci Design chassis is pulling a combined input power of between 440W and 480W, while the Supermicro chassis ranges from 340W to 370W. That is a pretty substantial difference, particularly when you consider that the RAIDzilla 2.5 has an additional 48GB of memory and faster CPUs. It also appears that the Supermicro power supplies do a better job of sharing the output load between them - compare the spacing between the red and green lines (input wattage of the two power supplies) on the Ci Design vs. the Supermicro.
I am not sure what is causing the repeating "sawtooth" pattern in both the Ci Design and Supermicro power supplies. This data is collected using the SMCIPMITool utility as it is not available via the general IPMI sensor data. Perhaps this is a rounding or scaling issue in that utility.
Comparing these two graphs reveals something surprising to me - the Ci Design power supplies don't appear to be grossly less efficient than the Supermicro ones. The Ci Design ones seem to average 86% to 88% efficiency, and both appear to move in the same direction (if PS 1 decreases its efficiency, PS 2 will also decrease its efficiency by about the same amount). We see the opposite behavior with the Supermicro supplies - the combined efficiency is a pretty straight line at 90%. When one Supermicro power supply reduces its efficiency, the other one will increase its efficiency to compensate, which produces a very different appearance on the graph when compared to the Ci Design.
This would seem to indicate that the "sweet spot" for efficiency on the Supermicro power supplies is at a higher load percentage than the RAIDzilla 2.5 generates, at least during idle conditions such as when these graphs were recorded.
There is another possibility, in that I may have unintentionally magnified errors in the PMBus readings. There is no "Efficiency" data available via PMBus, so I am dividing output Watts by input Watts and multiplying by 100 to generate an estimated efficiency. For the combined efficiency, I take the computed efficiency for each power supply, add them together and then divide by 2.
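In script form, that calculation is just the following (using the two sample readings from the PMBus dump earlier - 171W out / 213W in and 185W out / 208W in):

# Efficiency per supply = output watts / input watts * 100; "combined" = average of the two
awk 'BEGIN {
	e1 = 171 / 213 * 100	# PS 1: ~80.3%
	e2 = 185 / 208 * 100	# PS 2: ~88.9%
	printf "PS1 %.1f%%  PS2 %.1f%%  combined %.1f%%\n", e1, e2, (e1 + e2) / 2
}'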
Regardless of the accuracy of the efficiency calculations, we do see that the RAIDzilla 2.5's idle power consumption is at least 100 Watts less than the idle power consumption of the RAIDzilla II. We have also seen how much more heat is generated by the Ci Design power supplies compared to the Supermicro supplies - something like a 50°F difference between the hottest point in the Ci Design power supply and the Supermicro. That heat has to go somewhere - first it gets exhausted by chassis fans running at a higher speed, and then additional room air conditioning is needed to carry the extra heat away.
Overall, I've been quite pleased with the design and performance of the Supermicro chassis and I will go ahead and convert my remaining RAIDzilla II units to RAIDzilla 2.5 versions as time permits.
As part of the RAIDzilla 2.5 refresh, I started thinking about what I'd like to do for a potential future RAIDzilla III. The Supermicro X10DRH-CT looks like a possible motherboard for a RAIDzilla III. It has on-board SAS3 support (8 ports) as well as a pair of Intel X540 10GbE ports, so I would not need to use expansion cards for those functions. I would need to use an expander version of the backplane as this motherboard only has 8 SAS-3 ports, but with the speed being doubled to 12Gbps on each of those ports, that should not be a problem. This motherboard can have up to 1TB of RAM installed (16 * 64GB modules).
For the computer geeks out there, this is a "dmesg" output of the system booting up, listing the installed hardware:
Copyright (c) 1992-2019 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 12.0-STABLE #0 r343485: Sat Jan 26 22:09:27 EST 2019
    terry@rz1.glaver.org:/usr/obj/usr/src/amd64.amd64/sys/RAIDZILLA amd64
FreeBSD clang version 6.0.1 (tags/RELEASE_601/final 335540) (based on LLVM 6.0.1)
VT(vga): resolution 640x480
CPU: Intel(R) Xeon(R) CPU X5680 @ 3.33GHz (3333.53-MHz K8-class CPU)
  Origin="GenuineIntel" Id=0x206c2 Family=0x6 Model=0x2c Stepping=2
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x29ee3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,POPCNT,AESNI>
  AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
  TSC: P-state invariant, performance statistics
real memory = 103083409408 (98308 MB)
avail memory = 100171018240 (95530 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <SUPERM APIC1635>
FreeBSD/SMP: Multiprocessor System Detected: 12 CPUs
FreeBSD/SMP: 2 package(s) x 6 core(s)
random: unblocking device.
ioapic0: Changing APIC ID to 1
ioapic1: Changing APIC ID to 3
ioapic2: Changing APIC ID to 5
ioapic0 <Version 2.0> irqs 0-23 on motherboard
ioapic1 <Version 2.0> irqs 24-47 on motherboard
ioapic2 <Version 2.0> irqs 48-71 on motherboard
Launching APs: 5 10 9 1 4 3 2 6 7 8 11
Timecounter "TSC-low" frequency 1666762732 Hz quality 1000
random: entropy device external interface
kbd1 at kbdmux0
vtvga0: <VT VGA driver> on motherboard
cryptosoft0: <software crypto> on motherboard
aesni0: <AES-CBC,AES-XTS,AES-GCM,AES-ICM> on motherboard
acpi0: <SMCI > on motherboard
acpi0: Overriding SCI from IRQ 9 to IRQ 20
acpi0: Power Button (fixed)
cpu0: <ACPI CPU> on acpi0
attimer0: <AT timer> port 0x40-0x43 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
atrtc0: <AT realtime clock> port 0x70-0x71 irq 8 on acpi0
atrtc0: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 350
Event timer "HPET1" frequency 14318180 Hz quality 340
Event timer "HPET2" frequency 14318180 Hz quality 340
Event timer "HPET3" frequency 14318180 Hz quality 340
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x808-0x80b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff numa-domain 0 on acpi0
pci0: <ACPI PCI bus> numa-domain 0 on pcib0
pcib1: <ACPI PCI-PCI bridge> at device 1.0 numa-domain 0 on pci0
pci1: <ACPI PCI bus> numa-domain 0 on pcib1
igb0: <Intel(R) PRO/1000 PCI-Express Network Driver> port 0xdc00-0xdc1f mem 0xfb9e0000-0xfb9fffff,0xfb9c0000-0xfb9dffff,0xfb99c000-0xfb99ffff irq 28 at device 0.0 numa-domain 0 on pci1
igb0: attach_pre capping queues at 8
igb0: using 1024 tx descriptors and 1024 rx descriptors
igb0: msix_init qsets capped at 8
igb0: pxm cpus: 6 queue msgs: 9 admincnt: 1
igb0: using 6 rx queues 6 tx queues
igb0: Using MSIX interrupts with 7 vectors
igb0: allocated for 6 tx_queues
igb0: allocated for 6 rx_queues
igb0: Ethernet address: 00:25:90:xx:xx:xx
igb0: netmap queues/slots: TX 6/1024, RX 6/1024
igb1: <Intel(R) PRO/1000 PCI-Express Network Driver> port 0xd800-0xd81f mem 0xfb920000-0xfb93ffff,0xfb900000-0xfb91ffff,0xfb8dc000-0xfb8dffff irq 40 at device 0.1 numa-domain 0 on pci1
igb1: attach_pre capping queues at 8
igb1: using 1024 tx descriptors and 1024 rx descriptors
igb1: msix_init qsets capped at 8
igb1: pxm cpus: 6 queue msgs: 9 admincnt: 1
igb1: using 6 rx queues 6 tx queues
igb1: Using MSIX interrupts with 7 vectors
igb1: allocated for 6 tx_queues
igb1: allocated for 6 rx_queues
igb1: Ethernet address: 00:25:90:xx:xx:xx
igb1: netmap queues/slots: TX 6/1024, RX 6/1024
pcib2: <ACPI PCI-PCI bridge> at device 3.0 numa-domain 0 on pci0
pci2: <ACPI PCI bus> numa-domain 0 on pcib2
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver> mem 0xf8e00000-0xf8ffffff,0xf8dfc000-0xf8dfffff irq 24 at device 0.0 numa-domain 0 on pci2
ix0: using 2048 tx descriptors and 2048 rx descriptors
ix0: msix_init qsets capped at 16
ix0: pxm cpus: 6 queue msgs: 63 admincnt: 1
ix0: using 6 rx queues 6 tx queues
ix0: Using MSIX interrupts with 7 vectors
ix0: allocated for 6 queues
ix0: allocated for 6 rx queues
ix0: Ethernet address: a0:36:9f:xx:xx:xx
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix0: netmap queues/slots: TX 6/2048, RX 6/2048
pcib3: <ACPI PCI-PCI bridge> at device 5.0 numa-domain 0 on pci0
pci3: <ACPI PCI bus> numa-domain 0 on pcib3
pcib4: <ACPI PCI-PCI bridge> at device 7.0 numa-domain 0 on pci0
pci4: <ACPI PCI bus> numa-domain 0 on pcib4
nvme0: <Generic NVMe Device> mem 0xfbdec000-0xfbdeffff irq 30 at device 0.0 numa-domain 0 on pci4
nvd0: <INTEL SSDPED1D280GA> NVMe namespace
nvd0: 267090MB (547002288 512 byte sectors)
pcib5: <ACPI PCI-PCI bridge> at device 9.0 numa-domain 0 on pci0
pci5: <ACPI PCI bus> numa-domain 0 on pcib5
pci0: <base peripheral, interrupt controller> at device 20.0 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.1 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.2 (no driver attached)
pci0: <base peripheral, interrupt controller> at device 20.3 (no driver attached)
ioat0: <TBG IOAT Ch0> mem 0xfbef8000-0xfbefbfff irq 43 at device 22.0 numa-domain 0 on pci0
ioat0: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat1: <TBG IOAT Ch1> mem 0xfbef4000-0xfbef7fff irq 44 at device 22.1 numa-domain 0 on pci0
ioat1: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat2: <TBG IOAT Ch2> mem 0xfbef0000-0xfbef3fff irq 45 at device 22.2 numa-domain 0 on pci0
ioat2: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat3: <TBG IOAT Ch3> mem 0xfbeec000-0xfbeeffff irq 46 at device 22.3 numa-domain 0 on pci0
ioat3: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat4: <TBG IOAT Ch4> mem 0xfbee8000-0xfbeebfff irq 43 at device 22.4 numa-domain 0 on pci0
ioat4: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat5: <TBG IOAT Ch5> mem 0xfbee4000-0xfbee7fff irq 44 at device 22.5 numa-domain 0 on pci0
ioat5: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat6: <TBG IOAT Ch6> mem 0xfbee0000-0xfbee3fff irq 45 at device 22.6 numa-domain 0 on pci0
ioat6: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat7: <TBG IOAT Ch7> mem 0xfbedc000-0xfbedffff irq 46 at device 22.7 numa-domain 0 on pci0
ioat7: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
uhci0: <Intel 82801JI (ICH10) USB controller USB-D> port 0xbf80-0xbf9f irq 16 at device 26.0 numa-domain 0 on pci0
uhci0: LegSup = 0x2f00
usbus0 numa-domain 0 on uhci0
usbus0: 12Mbps Full Speed USB v1.0
uhci1: <Intel 82801JI (ICH10) USB controller USB-E> port 0xbf40-0xbf5f irq 21 at device 26.1 numa-domain 0 on pci0
uhci1: LegSup = 0x2f00
usbus1 numa-domain 0 on uhci1
usbus1: 12Mbps Full Speed USB v1.0
uhci2: <Intel 82801JI (ICH10) USB controller USB-F> port 0xbf20-0xbf3f irq 19 at device 26.2 numa-domain 0 on pci0
uhci2: LegSup = 0x2f00
usbus2 numa-domain 0 on uhci2
usbus2: 12Mbps Full Speed USB v1.0
ehci0: <Intel 82801JI (ICH10) USB 2.0 controller USB-B> mem 0xfbeda000-0xfbeda3ff irq 18 at device 26.7 numa-domain 0 on pci0
usbus3: EHCI version 1.0
usbus3 numa-domain 0 on ehci0
usbus3: 480Mbps High Speed USB v2.0
uhci3: <Intel 82801JI (ICH10) USB controller USB-A> port 0xbf00-0xbf1f irq 23 at device 29.0 numa-domain 0 on pci0
uhci3: LegSup = 0x2f00
usbus4 numa-domain 0 on uhci3
usbus4: 12Mbps Full Speed USB v1.0
uhci4: <Intel 82801JI (ICH10) USB controller USB-B> port 0xbec0-0xbedf irq 19 at device 29.1 numa-domain 0 on pci0
uhci4: LegSup = 0x2f00
usbus5 numa-domain 0 on uhci4
usbus5: 12Mbps Full Speed USB v1.0
uhci5: <Intel 82801JI (ICH10) USB controller USB-C> port 0xbea0-0xbebf irq 18 at device 29.2 numa-domain 0 on pci0
uhci5: LegSup = 0x2f00
usbus6 numa-domain 0 on uhci5
usbus6: 12Mbps Full Speed USB v1.0
ehci1: <Intel 82801JI (ICH10) USB 2.0 controller USB-A> mem 0xfbed8000-0xfbed83ff irq 23 at device 29.7 numa-domain 0 on pci0
usbus7: EHCI version 1.0
usbus7 numa-domain 0 on ehci1
usbus7: 480Mbps High Speed USB v2.0
pcib6: <ACPI PCI-PCI bridge> at device 30.0 numa-domain 0 on pci0
pci6: <ACPI PCI bus> numa-domain 0 on pcib6
vgapci0: <VGA-compatible display> mem 0xf9000000-0xf9ffffff,0xfaffc000-0xfaffffff,0xfb000000-0xfb7fffff irq 16 at device 4.0 numa-domain 0 on pci6
vgapci0: Boot video device
isab0: <PCI-ISA bridge> at device 31.0 numa-domain 0 on pci0
isa0: <ISA bus> numa-domain 0 on isab0
ahci0: <Intel ICH10 AHCI SATA controller> port 0xbff0-0xbff7,0xbfac-0xbfaf,0xbfe0-0xbfe7,0xbfa8-0xbfab,0xbe80-0xbe9f mem 0xfbed6000-0xfbed67ff irq 19 at device 31.2 numa-domain 0 on pci0
ahci0: AHCI v1.20 with 6 3Gbps ports, Port Multiplier not supported
ahcich0: <AHCI channel> at channel 0 on ahci0
ahcich1: <AHCI channel> at channel 1 on ahci0
ahcich2: <AHCI channel> at channel 2 on ahci0
ahcich3: <AHCI channel> at channel 3 on ahci0
ahcich4: <AHCI channel> at channel 4 on ahci0
ahcich5: <AHCI channel> at channel 5 on ahci0
ahciem0: <AHCI enclosure management bridge> on ahci0
pcib7: <ACPI Host-PCI bridge> numa-domain 1 on acpi0
pci7: <ACPI PCI bus> numa-domain 1 on pcib7
pcib8: <PCI-PCI bridge> at device 0.0 numa-domain 1 on pci7
pci8: <PCI bus> numa-domain 1 on pcib8
pcib9: <ACPI PCI-PCI bridge> at device 1.0 numa-domain 1 on pci7
pci9: <ACPI PCI bus> numa-domain 1 on pcib9
pcib10: <ACPI PCI-PCI bridge> at device 3.0 numa-domain 1 on pci7
pci10: <ACPI PCI bus> numa-domain 1 on pcib10
mps0: <Avago Technologies (LSI) SAS2008> port 0xe800-0xe8ff mem 0xf7ab0000-0xf7abffff,0xf7ac0000-0xf7afffff irq 48 at device 0.0 numa-domain 1 on pci10
mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps0: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>
pcib11: <ACPI PCI-PCI bridge> at device 5.0 numa-domain 1 on pci7
pci11: <ACPI PCI bus> numa-domain 1 on pcib11
pcib12: <ACPI PCI-PCI bridge> at device 7.0 numa-domain 1 on pci7
pci12: <ACPI PCI bus> numa-domain 1 on pcib12
mps1: <Avago Technologies (LSI) SAS2116> port 0xf800-0xf8ff mem 0xf7f3c000-0xf7f3ffff,0xf7f40000-0xf7f7ffff irq 54 at device 0.0 numa-domain 1 on pci12
mps1: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
mps1: IOCCapabilities: 1285c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,EventReplay,HostDisc>
pcib13: <ACPI PCI-PCI bridge> at device 9.0 numa-domain 1 on pci7
pci13: <ACPI PCI bus> numa-domain 1 on pcib13
pci7: <base peripheral, interrupt controller> at device 20.0 (no driver attached)
pci7: <base peripheral, interrupt controller> at device 20.1 (no driver attached)
pci7: <base peripheral, interrupt controller> at device 20.2 (no driver attached)
pci7: <base peripheral, interrupt controller> at device 20.3 (no driver attached)
ioat8: <TBG IOAT Ch0> mem 0xf79f8000-0xf79fbfff irq 67 at device 22.0 numa-domain 1 on pci7
ioat8: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat9: <TBG IOAT Ch1> mem 0xf79f4000-0xf79f7fff irq 68 at device 22.1 numa-domain 1 on pci7
ioat9: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat10: <TBG IOAT Ch2> mem 0xf79f0000-0xf79f3fff irq 69 at device 22.2 numa-domain 1 on pci7
ioat10: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat11: <TBG IOAT Ch3> mem 0xf79ec000-0xf79effff irq 70 at device 22.3 numa-domain 1 on pci7
ioat11: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat12: <TBG IOAT Ch4> mem 0xf79e8000-0xf79ebfff irq 67 at device 22.4 numa-domain 1 on pci7
ioat12: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat13: <TBG IOAT Ch5> mem 0xf79e4000-0xf79e7fff irq 68 at device 22.5 numa-domain 1 on pci7
ioat13: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat14: <TBG IOAT Ch6> mem 0xf79e0000-0xf79e3fff irq 69 at device 22.6 numa-domain 1 on pci7
ioat14: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
ioat15: <TBG IOAT Ch7> mem 0xf79dc000-0xf79dffff irq 70 at device 22.7 numa-domain 1 on pci7
ioat15: Capabilities: 77<Block_Fill,Move_CRC,DCA,Marker_Skipping,CRC,Page_Break>
acpi_button0: <Power Button> on acpi0
ipmi0: <IPMI System Interface> port 0xca2-0xca3 on acpi0
ipmi0: KCS mode found at io 0xca2 on acpi
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0
orm0: <ISA Option ROMs> at iomem 0xc0000-0xc7fff,0xcb000-0xcbfff pnpid ORM0000 on isa0
atkbdc0: <Keyboard controller (i8042)> at port 0x60,0x64 on isa0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
coretemp0: <CPU On-Die Thermal Sensors> on cpu0
est0: <Enhanced SpeedStep Frequency Control> on cpu0
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
ugen2.1: <Intel UHCI root HUB> at usbus2
ugen5.1: <Intel UHCI root HUB> at usbus5
ugen0.1: <Intel UHCI root HUB> at usbus0
ugen1.1: <Intel UHCI root HUB> at usbus1
ugen4.1: <Intel UHCI root HUB> at usbus4
ugen3.1: <Intel EHCI root HUB> at usbus3
ugen6.1: <Intel UHCI root HUB> at usbus6
ugen7.1: <Intel EHCI root HUB> at usbus7
uhub0: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus2
uhub1: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus5
uhub3: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus7
uhub2: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus6
uhub6: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus1
uhub5: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus0
uhub7: <Intel EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus3
uhub4: <Intel UHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus4
ipmi0: IPMI device rev. 1, firmware rev. 3.05, version 2.0, device support mask 0xbf
cd0 at ahcich2 bus 0 scbus2 target 0 lun 0
cd0: <TEAC DVD-ROM DV-28SW R.2A> Removable CD-ROM SCSI device
cd0: Serial Number 10051107xxxxxx
cd0: 150.000MB/s transfers (SATA 1.x, UDMA5, ATAPI 12bytes, PIO 8192bytes)
cd0: Attempt to query device size failed: NOT READY, Medium not present - tray closed
ses0 at ahciem0 bus 0 scbus6 target 0 lun 0
ses0: <AHCI SGPIO Enclosure 1.00 0001> SEMB S-E-S 2.00 device
ses0: SEMB SES Device
ipmi0: Number of channels 2
ipmi0: Attached watchdog
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <Samsung SSD 850 PRO 256GB EXM04B6Q> ACS-2 ATA SATA 3.x device
ada0: Serial Number S251NXAGBxxxxxx
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: Command Queueing enabled
ada0: 244198MB (500118192 512 byte sectors)
ada0: quirks=0x3<4K,NCQ_TRIM_BROKEN>
ch0 at mps0 bus 0 scbus7 target 1 lun 1
ch0: <IBM 3573-TL F.11> Removable Changer SPC-3 SCSI device
ch0: Serial Number 00X4U7xxxxxx_LL0
ch0: 600.000MB/s transfers
ch0: Command Queueing enabled
ch0: 44 slots, 1 drive, 1 picker, 3 portals
sa0 at mps0 bus 0 scbus8 target 0 lun 0
sa0: <IBM ULT3580-HH6 H4T3> Removable Sequential Access SPC-4 SCSI device
sa0: Serial Number 10WTxxxxxx
sa0: 600.000MB/s transfers
da0 at mps1 bus 0 scbus8 target 22 lun 0
da0: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da0: Serial Number VLxxxxxx
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 7630885MB (1953506646 4096 byte sectors)
da1 at mps1 bus 0 scbus8 target 23 lun 0
da1: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da1: Serial Number VLxxxxxx
da1: 600.000MB/s transfers
da1: Command Queueing enabled
da1: 7630885MB (1953506646 4096 byte sectors)
da2 at mps1 bus 0 scbus8 target 25 lun 0
da2: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da2: Serial Number VLxxxxxx
da2: 600.000MB/s transfers
da2: Command Queueing enabled
da2: 7630885MB (1953506646 4096 byte sectors)
da3 at mps1 bus 0 scbus8 target 26 lun 0
da3: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da3: Serial Number VLxxxxxx
da3: 600.000MB/s transfers
da3: Command Queueing enabled
da3: 7630885MB (1953506646 4096 byte sectors)
da4 at mps1 bus 0 scbus8 target 27 lun 0
da4: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da4: Serial Number VLxxxxxx
da4: 600.000MB/s transfers
da4: Command Queueing enabled
da4: 7630885MB (1953506646 4096 byte sectors)
da5 at mps1 bus 0 scbus8 target 28 lun 0
da5: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da5: Serial Number VKxxxxxx
da5: 600.000MB/s transfers
da5: Command Queueing enabled
da5: 7630885MB (1953506646 4096 byte sectors)
da6 at mps1 bus 0 scbus8 target 29 lun 0
da6: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da6: Serial Number VJxxxxxx
da6: 600.000MB/s transfers
da6: Command Queueing enabled
da6: 7630885MB (1953506646 4096 byte sectors)
da7 at mps1 bus 0 scbus8 target 30 lun 0
da7: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da7: Serial Number VKxxxxxx
da7: 600.000MB/s transfers
da7: Command Queueing enabled
da7: 7630885MB (1953506646 4096 byte sectors)
da8 at mps1 bus 0 scbus8 target 31 lun 0
da8: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da8: Serial Number VLxxxxxx
da8: 600.000MB/s transfers
da8: Command Queueing enabled
da8: 7630885MB (1953506646 4096 byte sectors)
da9 at mps1 bus 0 scbus8 target 32 lun 0
da9: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da9: Serial Number VJxxxxxx
da9: 600.000MB/s transfers
da9: Command Queueing enabled
da9: 7630885MB (1953506646 4096 byte sectors)
da10 at mps1 bus 0 scbus8 target 33 lun 0
da10: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da10: Serial Number VLxxxxxx
da10: 600.000MB/s transfers
da10: Command Queueing enabled
da10: 7630885MB (1953506646 4096 byte sectors)
da11 at mps1 bus 0 scbus8 target 34 lun 0
da11: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da11: Serial Number VLxxxxxx
da11: 600.000MB/s transfers
da11: Command Queueing enabled
da11: 7630885MB (1953506646 4096 byte sectors)
da12 at mps1 bus 0 scbus8 target 35 lun 0
da12: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da12: Serial Number VJxxxxxx
da12: 600.000MB/s transfers
da12: Command Queueing enabled
da12: 7630885MB (1953506646 4096 byte sectors)
da13 at mps1 bus 0 scbus8 target 36 lun 0
da13: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da13: Serial Number VKxxxxxx
da13: 600.000MB/s transfers
da13: Command Queueing enabled
da13: 7630885MB (1953506646 4096 byte sectors)
da14 at mps1 bus 0 scbus8 target 37 lun 0
da14: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da14: Serial Number VLxxxxxx
da14: 600.000MB/s transfers
da14: Command Queueing enabled
da14: 7630885MB (1953506646 4096 byte sectors)
da15 at mps1 bus 0 scbus8 target 38 lun 0
da15: <HGST HUH728080AL4200 A7JD> Fixed Direct Access SPC-4 SCSI device
da15: Serial Number VKxxxxxx
da15: 600.000MB/s transfers
da15: Command Queueing enabled
da15: 7630885MB (1953506646 4096 byte sectors)
ipmi0: Establishing power cycle handler
uhub5: 2 ports with 2 removable, self powered
uhub2: 2 ports with 2 removable, self powered
uhub6: 2 ports with 2 removable, self powered
uhub1: 2 ports with 2 removable, self powered
uhub0: 2 ports with 2 removable, self powered
uhub4: 2 ports with 2 removable, self powered
uhub7: 6 ports with 6 removable, self powered
uhub3: 6 ports with 6 removable, self powered
lo0: link state changed to UP
ix0: link state changed to UP
ugen4.2: <vendor 0x0557 product 0x8021> at usbus4
uhub8 numa-domain 0 on uhub4
uhub8: <vendor 0x0557 product 0x8021, class 9/0, rev 1.10/1.00, addr 2> on usbus4
ugen1.2: <American Megatrends Inc. Virtual Keyboard and Mouse> at usbus1
ukbd0 numa-domain 0 on uhub6
ukbd0: <Keyboard Interface> on usbus1
kbd2 at ukbd0
ums0 numa-domain 0 on uhub6
ums0: <Mouse Interface> on usbus1
ums0: 3 buttons and [XY] coordinates ID=0
uhub8: 4 ports with 4 removable, self powered
ugen4.3: <ATEN International Co. Ltd GCS1716 V3.2.319> at usbus4
ukbd1 numa-domain 0 on uhub8
ukbd1: <ATEN International Co. Ltd GCS1716 V3.2.319, class 0/0, rev 1.10/1.00, addr 3> on usbus4
kbd3 at ukbd1
uhid0 numa-domain 0 on uhub8
uhid0: <ATEN International Co. Ltd GCS1716 V3.2.319, class 0/0, rev 1.10/1.00, addr 3> on usbus4
ums1 numa-domain 0 on uhub8
ums1: <ATEN International Co. Ltd GCS1716 V3.2.319, class 0/0, rev 1.10/1.00, addr 3> on usbus4
ums1: 5 buttons and [XYZ] coordinates ID=0
ums2 numa-domain 0 on uhub8
ums2: <ATEN International Co. Ltd GCS1716 V3.2.319, class 0/0, rev 1.10/1.00, addr 3> on usbus4
ums2: 3 buttons and [Z] coordinates ID=0
Trying to mount root from ufs:/dev/ada0p2 [rw]...
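After a reboot, a quick way to confirm that both HBAs probed all of the data drives is simply to count them. This is just a sketch of the sort of check I mean - the model string and the expected count of 16 come straight from the listing above, and the second form only works while those probe messages are still in the kernel message buffer:

# Count the HGST data drives CAM currently sees - should print 16:
camcontrol devlist | grep -c "HUH728080AL4200"

# Or count the probe messages remaining in the kernel message buffer:
dmesg | grep -c "Fixed Direct Access SPC-4"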