Proctor-Silex / Hamilton Beach Vegetable Steamer / Rice Cooker Model C36507 / 36500

We have a Proctor-Silex C36507 Vegetable Steamer / Rice Cooker which we’ve had for ages. Somewhere in our various moves, the manual got separated from the gizmo (personally, I suspect Sue’s piano ate it). For the stuff we usually make, that’s no big deal – dump in water, dump in vegetables, set timer. Tonight we decided to make rice instead of going out in the 100-degree heat to the local Chinese restaurant like we usually do. Of course, the 25-word summary on the back of the unit doesn’t go into detail about cooking rice, and the rice bag was similarly unhelpful.

Dinner was delayed a bit while we both scoured the Internet looking for a copy of the manual – after all, the Internet has everything – so why wouldn’t it be available there? Well, you’d probably be surprised – we were. There were a lot of requests from people looking for the same manual, without any luck. I did find a manual on manualslib.com. However, it was a not-so-great scan of a nearly 25-year-old FAX of a printer’s draft of the manual, complete with crop marks and various spots of indeterminate origin. That wouldn’t be so bad, except that aside from the front and back covers, the actual manual pages had been shrunk to only occupy around 1/4 of each 8.5″ x 11″ page. Since there’s no room in our small kitchen for a microscope to read that, I did a bit of work in Photoshop to resize, crop, and clean it up. The resulting printout is currently occupying a place of honor in the kitchen (on the side of the fridge).

On the off chance that you’re one of the dozens of people looking for this manual, I’ve put a copy on my web site here (PDF) to save you the trouble. Note that this is the same manual you can get from ManualsLib, just cleaned up. All credit for tracking down this elusive beast goes to them.

Net Neutrality isn’t the only problem

Today (July 12th, 2017) a large number of sites have joined together to raise awareness of the threats to network neutrality. For example, reddit has a pop-over window that slowly types a message beginning with “The internet’s less fun when your favorite sites load slowly, isn’t it?” This is certainly a valid concern, and many people, including myself, have legitimate concerns about how the Internet is regulated. But there are enough sites raising that point, so I’d like to talk about something different – how sites are “shooting themselves in the foot” with slow-loading (and often buggy) page content.

It all starts when a web site decides they want to track visitors for demographics or other purposes. There are a large number of “free”* tools available that will collect the data and let you analyze it in any way you like. Sure, it comes with some hidden Javascript that does things you can’t see, but hey – it is only one thing on a page of otherwise-useful content, right?

Next, the site decides they’d like to help cover the cost of running the site by having a few advertisements. So they add code provided by the advertising platform(s) they’ve selected. Their page now loads a bit slower, and users see ads, but the users will still come for the content, right? And the occasional malware that slips through the advertising platform and gets shown on their site isn’t really their fault, right? They can always blame the advertising platform.

Somewhat later, the site gets an “offer they can’t refuse” to run some “sponsored content”. The page gets even slower and users are having a hard time distinguishing actual content from ads. Clicking on what looks like actual content causes an ad to start playing, or triggers a pop-under, or any one of a number of things that make for an unpleasant user experience.

Once everyone is used to this, things appear to settle down. Complaints from users are infrequent (probably because they can no longer figure out how to contact the site to report problems). Everyone has forgotten how fast the site used to load, except for the users running ad blockers, cookie blockers, script blockers, and so on.

But one day an SSL certificate becomes invalid for some reason (expired, a site was renamed, etc.) and the users are now getting a new annoyance like a pop-up saying that the certificate for btrll.com is invalid. Most users go “huh?” because they weren’t visiting (or at least they thought they weren’t visiting) btrll.com. Clicking the “close” button helps for all of a second before the pop-up is back, because that ad site is determined to show you that ad. In frustration, the user closes their browser and goes out to buy a newspaper.

By this point, perhaps 5% of the actual page content is from the site the user was intending to visit. The rest is user tracking, advertising, and perhaps a bit of malware. There is a free tool run by WebPagetest.org which will let you analyze any web site to see what it is loading and why it is slow.
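
If you’d like a rough first cut without leaving the command line, a few lines of Python can tally the third-party hosts referenced by a page’s initial HTML. This is only a sketch, not WebPagetest: it sees just the first HTML document (none of what the Javascript loads afterwards), the “last two DNS labels” test for what counts as third-party is crude, and the CNN URL is simply an example.

    #!/usr/bin/env python3
    """Crude first-party vs. third-party tally for a page's initial HTML.
    Standard library only; it does NOT execute Javascript, so the real
    picture (which WebPagetest shows) is even worse."""
    from collections import Counter
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class ResourceCollector(HTMLParser):
        """Collects src/href URLs from script, img, iframe and link tags."""
        def __init__(self):
            super().__init__()
            self.urls = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "img", "iframe", "link"):
                for name, value in attrs:
                    if name in ("src", "href") and value:
                        self.urls.append(value)

    def third_party_hosts(page_url):
        # Treat the last two DNS labels as "our" domain -- crude but simple.
        own = ".".join(urlparse(page_url).hostname.split(".")[-2:])
        html = urlopen(page_url, timeout=30).read().decode("utf-8", "replace")
        collector = ResourceCollector()
        collector.feed(html)
        counts = Counter()
        for ref in collector.urls:
            host = urlparse(urljoin(page_url, ref)).hostname
            if host and not host.endswith(own):
                counts[host] += 1
        return counts

    if __name__ == "__main__":
        for host, n in third_party_hosts("https://www.cnn.com/").most_common(20):
            print(f"{n:4d}  {host}")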

Here is the result for the CNN home page:

Now, that’s too small to be able to read, so this is the first part of it (click on this image for a larger view):

The blue line at 21 seconds shows when the page finished loading, although you can see that Javascript from a number of advertising providers continues to run indefinitely.

Now, let’s take a look at Weather Underground. Surely just serving weather information would have far less bloat than CNN, right? Not really:

Now, that’s too small to be able to read, so this is the first part of it (click on this image for a larger view):

It does manage to load in less time than CNN, but it is still pretty awful.

In the spirit of full disclosure, here is the result for this blog page:

Since the entire report fits, I didn’t need to add an unreadably-small overview image.

If you manage a web site, I encourage you to try WebPagetest.org yourself and see why your site is slow. If you’re just a user, you can also use WebPagetest.org to see why the sites you visit are slow. If you’re using ad blocking or site blacklisting software while you browse, the list of hosts that are serving advertisements or other unwanted content will probably be useful to you when added to your block / blacklist.
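
If you go that route, the conversion is easy to script. Here is a minimal sketch that turns a list of hostnames into hosts-file blackhole entries; the ad_hosts.txt filename and the 0.0.0.0 target are just common conventions I’m assuming, not requirements of any particular blocker.

    #!/usr/bin/env python3
    """Minimal sketch: turn a list of unwanted hostnames (one per line,
    e.g. copied out of a WebPagetest report) into hosts-file style
    blackhole entries. Filename and hostnames are examples only."""
    import sys

    def to_hosts_entries(lines):
        # 0.0.0.0 is a common "blackhole" address for hosts-file blocking.
        return ["0.0.0.0 " + line.strip() for line in lines
                if line.strip() and not line.lstrip().startswith("#")]

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "ad_hosts.txt"
        with open(path) as f:
            print("\n".join(to_hosts_entries(f)))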

* As they say, “If you aren’t paying for it, then you are the product being sold”.

Soviet PDP-11 Clones

In addition to having a domestic computer design industry (see Pioneers of Soviet Computing [local copy]), the Soviet Union was well-known for copying computer designs from the West. While there were many possible reasons for this, one of the most commonly cited was the desire to run specific software, also from the West. This could be a particular application program or a whole operating system. Certainly, not having to write software in order to have a deliverable computing product was a huge benefit to the Soviets. While the scale of this cloning program was not entirely understood by the West during the Soviet era (see Total Soviet Computing Power [local copy]), it was well known that a good deal of cloning was going on.

Steal the best
Image courtesy of FSU’s Silicon Zoo

DEC supposedly inscribed the phrase “VAX – When you care enough to steal the very best” on an otherwise-unused area of the die for one of their MicroVAX CPUs. The phrase in the picture reads “СВАКС… Когда вы забатите довольно воровать настоящий лучший” which is horribly mangled Russian, but I think it got the point across.


The highest-performing PDP-11 CPU DEC built was based on the DCJ11 (or J-11, or “Jaws”) microprocessor. This CPU was the basis for all subsequent DEC PDP-11 products (PDP-11/53, /73, /83, /84, /93 and /94) up until they sold the product line to Mentec, who continued to use the J-11 on their M70 / M71 / M80 / M90 and M100 CPUs. Not until nearly four years after acquiring the DEC PDP-11 line did Mentec introduce a new design, the M11, which was not based on the J-11. This was probably due to the last J-11 chips being manufactured in early 1998, as production was apparently stopped as soon as Compaq acquired DEC.

The J-11 design was not without its problems. It was a joint manufacturing effort of DEC and Harris Semiconductor (Intersil). DEC had previously used the Harris / Intersil 61×0 chips, which implemented the PDP-8 CPU in a microprocessor. They probably weren’t expecting the issues which plagued the J-11 project. In addition to problems with the CPU itself, there were problems with the optional floating-point accelerator chip (designed and built entirely by DEC) and the support chips needed to make the J-11 function in a system. This led to a number of costly recalls by DEC to fix (or conceal) problems. The original distinction between the various PDP-11 systems based on the J-11 was lost as parts (normally the floating-point accelerator chip) were removed and / or the board swapped for a slower 15MHz one in the field to get the systems working reliably. Eventually the J-11 systems became reliable enough that users could have an 18MHz CPU with working floating point. Earlier J-11 chips had speed restrictions (often 15MHz) and did not work with the floating-point chip. The planned Commercial Instruction Set (CIS) option was never produced, although you can see where it would have been placed on the bottom side of the CPU.

Certainly not all of the problems were on the Harris side – I’ve successfully run a J-11 at 24MHz on a 3rd-party board. The DEC support chip set was found to be limited to a bit over 18 MHz, which is why DEC did not press Harris particularly hard to meet the 20MHz design goal (for top-binned parts). The part number DCJ11-AE (the -AE suffix indicated the revision level) was the last version produced, the “good one”. Interestingly, the individual chips on the first DCJ11-AE CPUs were revision 1 on the DC334 chip and revision 11 on the DC335. The newest DCJ11-AE I’ve seen (with a module date code of 9820 and chip date codes of 9819) has a revision 4 DC334 chip and a revision 16 DC335 chip. That DCJ11-AE has the Harris logo stamped on the ceramic carrier as well as the individual chips, while a somewhat earlier sample with a 9711 date code has the same revision 4 and 16 chips, but without Harris markings on the ceramic carrier. 9820 is pretty close to the time DEC was acquired by Compaq, so the J-11 hung on to the bitter end, 4 years after DEC sold the rest of the PDP-11 business to Mentec. Apparently there weren’t user-visible changes which would cause the overall CPU revision to change to a DCJ11-AF. Perhaps the changes were to simplify the manufacturing process.

DEC also “shot themselves in the foot” by having one group think the part was solely for DEC’s use in building systems, while another group was trying to get design wins in 3rd-party products. This led to a bizarre situation where if you tried to purchase a J-11 chip by itself from DEC, you got a call from the J-11 product manager (Cathy Berida) who was forced by upper management to ask you what you planned on doing with it before the order would go through. Needless to say, DEC did not get a lot of OEM design wins due to their inconsistent policies regarding the chip. The result of this is that you can purchase case lots of never-used J-11 chips on places like eBay [local copy] if you happen to need a few hundred of them.


DEC M8192

Image courtesy of ElectronTubeStore

[This and all subsequent images in this post are clickable to show a higher-resolution version.]

This is a DEC M8192 module, used in the PDP-11/73 systems. It has an older J-11 CPU and no floating-point accelerator (FPA) chip (the large empty socket below the white J-11 CPU). A manual for it is available from Bitsavers [local copy]. Note that the manual doesn’t show the socket for the FPA, and the sole mention of the FPA is in the description of the internal J-11 CPU registers.


Soviet M8

Image courtesy of eBay user ru.seller

This is a Soviet M8 CPU board. It looks suspiciously like the DEC M8192 board, doesn’t it? Aside from some component substitutions due to limited availability of parts (such as the PLCC sockets for the support chips and the compact 4-LED display), it is pretty much the same board. Note that this board doesn’t even have a socket for the floating-point accelerator chip. The pads are on the board, but there is no socket. This may indicate that the clone parts were created before DEC got the various design issues ironed out. Additionally, the configuration jumpers are soldered in instead of being removable jumpers as they are on the DEC board. The board in the picture is non-functional as some components (mainly bypass capacitors) have been removed for some reason.


Soviet M8 detail

Image courtesy of eBay user ru.seller

Examining the M8 board in more detail, we can see some very interesting things. At the top center of this image, you can see two chips with the logo “MHS” and the date code “USA8616”. If you’ve never heard of MHS, I’m not surprised. They were a relatively obscure manufacturer of specialty ICs. MHS stands for “Matra Harris Semiconductor” – yup, the same Harris Semiconductor that was making J-11 parts for DEC. They probably had no idea their parts were ending up in the Soviet Union – often, “front” companies would purchase parts in the West and those parts would eventually make their way into the Soviet Union.

The MHS part is an HM3-65747-5 CMOS 4K x 1 static RAM. The DEC M8192 board, oddly enough, does not use the MHS part. Instead, it uses a National Semiconductor NMC2147HN-3, which appears to be a pin-compatible substitute.

Also in this detail image, you can see 5 parts where the manufacturer and part number information has been ground off and “РУ12” written on them with a marker pen. There is another of these parts outside the area of this detail. On the DEC M8192, these are Fairchild MB8168-55 NMOS 4K x 4 static RAMs. “РУ” was the Soviet type designator for a memory chip. One of the chips on the Soviet board does not have its identifying marks removed, and it appears to be an INMOS IMS1420D-55, also an NMOS 4K x 4 static RAM. The mysterious РУ12 is probably К132РУ12 as this page and this page both show that as an interchange part for the IMS1420-55. They’re almost certainly not Soviet-made parts as there would be no need to grind off the original markings in that case.


DEC DCJ11 top

This is the top of a genuine DEC DCJ11-AE. As you can see, there are two large chips mounted to a ceramic carrier. Under the top layer of ceramic you can see some of the leads that connect the two chips to each other and to the pins on the edge of the CPU. There are 4 bypass capacitors for each chip to filter out noise. There is also one SOT-package part (possibly a transistor or 3-terminal regulator) installed, with an unpopulated space for a second one. It is possible that the unpopulated space was for a part intended to be used on the underside of the CPU.


DEC DCJ11 bottom

The bottom view of the same part shows the pads which would have held the Commercial Instruction Set if it was ever implemented. You can also see additional leads in an intermediate ceramic layer – the ceramic carrier was a complex, multi-layer affair.


DEC DCJ11 angle

This angle view shows how the individual chips were soldered to the ceramic carrier.


DEC DCJ11 edge

Looking at the edge of the CPU, you can get an idea how thick the ceramic actually is on this part.


Soviet 1831 top

Here is where things get interesting. This is a Soviet 1831 clone of the J-11. The logo on the chips indicates that it was made by the NPO Electronics (НПО Электроника) factory (now VZZP) in Voronezh. Instead of the DC334 and DC335 numbering on the DEC chips, the chips on this board are labeled КН1831ВМ1 and КН1831ВУ1. Wikipedia has a detailed article on Soviet integrated circuit numbering; the designation breaks down as follows (a small decoding sketch follows the list):

  • К – Commercial / consumer component
  • Н – Ceramic leadless chip carrier (the individual chips on the CPU carrier)
  • 1 – Monolithic integrated circuit
  • 8 – Microprocessor
  • 31 – Number in series
  • ВМ – Microprocessor
  • ВУ – Microcode
  • 1 – Variant
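
Based on nothing more than the fields above, a toy decoder can pull these designators apart. This is just an illustration for these particular parts – the lookup tables contain only the handful of codes listed here, not the full Soviet numbering standard.

    import re

    # Field meanings are taken from the list above; only the codes used on
    # these particular parts are included.
    PREFIX = {"К": "commercial / consumer part",
              "Н": "ceramic leadless chip carrier"}
    TECHNOLOGY = {"1": "monolithic integrated circuit"}
    GROUP = {"8": "microprocessor"}
    FUNCTION = {"ВМ": "microprocessor", "ВУ": "microcode"}

    def decode(designator):
        """Decode a designator such as 'КН1831ВМ1' into its fields."""
        m = re.fullmatch(r"([КН]+)(\d)(\d)(\d+)([А-Я]{2})(\d+)", designator)
        if not m:
            raise ValueError("unrecognized designator: " + designator)
        prefix, tech, group, series, func, variant = m.groups()
        return {
            "prefix": [PREFIX.get(c, "?") for c in prefix],
            "technology": TECHNOLOGY.get(tech, "?"),
            "group": GROUP.get(group, "?"),
            "series number": series,
            "function": FUNCTION.get(func, "?"),
            "variant": variant,
        }

    if __name__ == "__main__":
        for part in ("КН1831ВМ1", "КН1831ВУ1"):
            print(part, decode(part))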

Apparently the two chips had their own code names – Тунгус 1 (Tungus 1) for the КН1831ВУ1 and Теорема 2 (Theorem 2) for the КН1831ВМ1.

You can see the somewhat different method of attaching the pins to the carrier, compared to the DEC CPU. This is due to the thinner carrier, as I will discuss below. The same four bypass capacitors are present, but the SOT-package part found on the J-11 is not, although the pads are there. The chips appear to have been hand-soldered onto the carrier. While the carrier in this picture is blue, variants with white and greenish carriers have been photographed. This particular part is labeled just M-2-1, while other, newer samples have been labeled М8К ред4 (M8K red4).


Soviet 1831 bottom

The bottom of the 1831 shows a much simpler method of construction, compared with the DEC J-11. No additional leads are visible and the only marking is “0133”. It is not known what this means – as the chips on the carrier have 8905 and 8904 date codes, it doesn’t make sense that the CPU would have remained unassembled for twelve years. Perhaps it was the date it was installed into or removed from a system?


Soviet 1831 angle

This angle view clearly shows the hand-soldering of the chips to the carrier.


Soviet 1831 edge

The edge view shows how much thinner the carrier is compared to the DEC J-11.


Soviet 1831 chip top

This detail shows the top of an unmounted КН1831ВУ1 chip. It is interesting that while the fabrication method was quite different from the DEC version, they apparently went through a lot of effort to match the packaging exactly. Perhaps they were trying to substitute the КН1831ВУ1 and КН1831ВМ1 chips one at a time onto a DEC package during development? That would not explain why this unusual packaging continued into production, though.


Soviet 1831 chip bottom

The bottom of the unmounted КН1831ВУ1 is pretty boring, having only a stamped “35”. This does not match the date code on the top of the chip, 9111, so perhaps it is an inspection mark.


Soviet 1811 top

This is an 1811 (DEC F-11, PDP-11/23 and /24) clone CPU. Unlike the 1831, this assembly is not a drop-in equivalent to any DEC F-11. It contains КН1811ВМ1, КН1811ВУ1, КН1811ВУ2 and КН1811ВУ3 chips. That would be a processor and 3 microcode ROMs. This is equivalent to a DEC F-11 and a DEC KEF11-AA FPU (Floating Point Unit). Oddly, in the DEC implementation the KTF11-AA MMU (Memory Management Unit) is necessary for using the KEF11-AA, as the FPU reuses some of the registers in the MMU. This chip is marked МК1 ред1 (MK1 red1). The logos on the chips show that they were fabricated by NPO Electronics, same as the J-11 clone.


Soviet 1811 bottom

The bottom shows that the CPU is made with a brown ceramic instead of the white ceramic (with blue top coating) used on the 1831. The bottom is marked 8821, which corresponds roughly to the date codes on the individual chips (8808 through 8811). Too faint to be seen clearly is the writing “26-027” across the top of the chip (as shown in this picture).


Soviet 1811 angle

An angle view, clearly showing the “MK1 red1” marking.


Soviet 1811 edge

Here you can see that the carrier is also quite thin, similar to the 1831.


Elektronika 89 board

Image courtesy of Soviet Digital Electronics Museum – Sergei Frolov

This is the CPU board from the Elektronika 89 minicomputer. You can see the 1811 CPU, along with the КР1811ВТ1 MMU chip, in the center of the board.


I hope you’ve enjoyed this look at a relatively unexplored (in the West) area of computer history. These parts occasionally show up on eBay where they often sell for inflated prices. Not all of the eBay listings have the parts described correctly, so rely on pictures (as long as they’re not “sample image only”) to see what you’re getting.

St. Peter’s College and the Mouse Balls

In the 1980s and 1990s, St. Peter’s College* had a number of labs with PCs for student use. Each lab was a separate room equipped with a relatively large number of identical systems. In this particular case the room was full of Northgate Intel® 386™SX systems. These were purchased as complete systems from Northgate, along with their Omnikey keyboards and “Microsoft mice”. These systems had been in use for some years (replacement of expensive working stuff takes a long time at private colleges). We’d learned early on that people would steal anything that “wasn’t nailed down”, so the PCs had metal cables wrapped around table legs and locked to the PC case, and the keyboards and mice had their cables tied onto those metal cables near the PC case with plastic cable ties.

The mice were your basic original Microsoft mice, shiny white plastic, 2 buttons, a roller ball and a 9-pin serial connector. None of that ergonomic optical PS/2 wheel and 3-or-more-buttons stuff.

We should have known that wasn’t sufficient security, as people had popped random keys off the keyboards every now and then. That wasn’t terribly difficult as they just popped off. Northgate even conveniently supplied a keycap removal tool with each keyboard.

Another thing you should know is that having an offbeat sense of humor** was pretty much a requirement for working in the Academic Computer Center at SPC. This was actually pretty common – you can see some other examples of it in The Jargon File and the original BOFH. The protagonist of this story is Joe, a fellow who looked a lot like Radar from the M*A*S*H TV series, who would often wear a hat with what looked like a fish sticking through it. One of Joe’s jobs was to handle minor wear-and-tear items in the computer labs.

One day, Joe comes downstairs to my office and tells me “Somebody took the balls out of all the mice in the Northgate lab!”, to which I replied “You mean someone castrated them?” He asked me what he should do, and I answered “Call Microsoft and order some new ones.” A few hours later he came back into my office and said he couldn’t get anybody on the phone at Microsoft who knew about Microsoft mice (Microsoft’s Hardware Division was apparently a secret at Microsoft, at least to the people who answered the main phone number at Microsoft). I tell him to keep trying and he eventually comes back and tells me he got the phone number of someone who could help him. Very proudly, he dialed the number from my office on speakerphone so I could hear the exchange:

Microsoft: “Microsoft, this is Ms. X at extension xxxx”
Joe: “Is this the group that handles Microsoft Mouse parts?”
Microsoft: “Yes, how can I help you?”
Joe: “Somebody castrated all our mice!”
Microsoft: <Click>
Joe: “Hello? Anyone there? Hello?”

I told him to wait a few hours and call them back from his office and order the darned mouse balls. He came back and said they agreed to the order, at which point we had to do the song-and-dance to get a purchase order issued (a story for another time).

In due time, a box arrived and Joe went to put the new balls into the mice. He comes back down and says “They don’t fit!”. I asked what he meant and he said they didn’t fit into the housing inside the mouse. I told him to call Microsoft back and ask what was going on. Again, he used the speakerphone in my office:

Microsoft: “Microsoft, this is Ms. X at extension xxxx”
Joe: “My balls are too big!”
Microsoft: <Click>
Joe: “Hello? Anyone there? Hello?”

Deja vu all over again. Eventually he gets a hold of someone who asks where he got them, and it turns out that when Microsoft increased the resolution of their mice, they did it simply by changing the size of the ball. They then fobbed their inventory of the older, lower-resolution mice off on their OEM Windows customers who needed cheap mice to sell with their computers.

Eventually, a package arrives from Microsoft with the correct-size mouse balls and Joe installs them, much to the relief of the students who had been crowding into the other PC labs whenever they needed to use Windows software. This package was somewhat oddly addressed, being sent to “St. Potato’s College”. Apparently someone at Microsoft shared our oddball sense of humor (or had been “infected” with it after the ball-ordering incidents). We decided to use superglue to glue the access covers for the mouse balls onto the mice to prevent this from happening in the future.

While I don’t have the address label from that second mouse ball shipment, Microsoft continued to use “St. Potato’s College” as the official name on our customer account for a number of years:

Potato?!?!

* The domain name at the time was spc.edu. Later on, they renamed the school to St. Peter’s University. Fortunately, spu.edu (pronounced “spew” or “ess pee-yew”) was already taken by Seattle Pacific University, so they used stpeters.edu. Around the same time, Jersey City State College (jcsc.edu) renamed itself to New Jersey City University. Shouldn’t that be New Jersey Jersey City University, or maybe New Jersey² City University? Anyway, they went with njcu.edu – I don’t even know how to say that – nijj-koo, maybe?

** We actually developed a “litmus test” for co-workers to see if they would fit in. It was pretty simple – we’d ask someone to imagine Elmer Fudd singing the theme from “The Way We Were”. If they broke out laughing or cracked a smile, they’d be a perfect fit. If they looked puzzled because they didn’t get it, they’d need some on-the-job training in our particular sense of humor. If they didn’t like it, chances were close to 90% they wouldn’t fit in and would leave relatively soon. In case you don’t get it… Mem-wees… Misty watta-color mem-wees of da way we wuhhhhhhhhhhh…

Picking up blogging again

While my blog has been silent for over a year, I’ve been inspired to start posting again. This was mostly because I’ve been relating various anecdotes to different people and many of them have said “you should really write a book about your experiences.” I also visited the Computer History Museum in California during the fall of 2016, and the combination of seeing their collection and going “I’ve worked with one of those” and seeing how much history has been lost made me decide to create some content of my own*.

Upcoming posts in the Computer History category will (mostly) detail my personal experiences with computer hardware and software spanning over forty years (yikes!). These posts will combine items from my personal collection with what I know about them. I will be researching these posts and will add links to external reference sites where I can.

The Personal Recollections category will (mostly) be narratives about my experiences working with the people who work with computers. These will be from my memory, to the best of my ability. In cases where I have posted the story to Usenet or a forum site (BBS, DECUServe, etc.) and that version differs in a non-minor way from the one I post here, I will try to provide links to my prior posts. Not all of those sites still exist, though. When I name people, they will either be first names only or will have consented to being mentioned (when you read some of these posts, you’ll know why).

As always, all photographs will be by me, unless otherwise credited.

* Both categories will often have footnotes (like this one). Often, some of the funniest bits will be in the footnotes. You can either click the blue asterisk(s) when you come across them in the main article, or just read your way down to the footnote(s) at the end.

This site is now Flash-free

All of the videos on this site are now encoded using open standards. In addition to not requiring the user to have Flash installed (or click-to-play if installed), this means that users can finally view them on portable devices (tested with both Android and iOS).

The theme I’m using is a heavily-modified version of TypoXP 2 and converting it to be responsive would be a massive undertaking. At some point I will have to migrate to a newer theme, and responsiveness will be one of the selection criteria. Until then, all I can suggest is liberal usage of the pinch-to-zoom feature on your mobile device.

Is no crypto always better than bad crypto?

SSL (Secure Sockets Layer, the code that forms the basis of the https:// in a URL) can use any number of different encryption methods (protocols) and key strengths. While all of the protocols / strengths were presumed to be secure at the time they were designed, faster computers have made “cracking” some of the older protocols practical, or at least potentially practical. Additionally, concerns have been raised that some of the underlying math may have been intentionally weakened by the proponents (for example, NIST and the NSA) of those protocols. In other cases, an underlying flaw in the protocol itself has been discovered. Due to this, web browsers have been removing support for these older, insecure protocols.
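
If you’re curious which protocol versions one of your own devices will still negotiate, Python’s ssl module can probe it one version at a time. This is a sketch, not a security scanner: www.example.com is a placeholder, it needs Python 3.7 or newer for ssl.TLSVersion, and on some systems the local OpenSSL build refuses TLS 1.0 / 1.1 on the client side, which shows up the same as the server refusing it.

    #!/usr/bin/env python3
    """Sketch: ask a server which TLS protocol versions it will negotiate.
    The hostname is a placeholder; certificate checking is disabled because
    we only care about the protocol handshake here."""
    import socket
    import ssl

    def probe(host, port=443):
        results = {}
        for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1,
                        ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
            try:
                ctx = ssl.create_default_context()
                ctx.check_hostname = False
                ctx.verify_mode = ssl.CERT_NONE
                ctx.minimum_version = version   # pin the handshake to exactly
                ctx.maximum_version = version   # one protocol version
                with socket.create_connection((host, port), timeout=10) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host) as tls:
                        results[version.name] = tls.version()  # e.g. 'TLSv1.2'
            except (ValueError, ssl.SSLError, OSError):
                results[version.name] = "refused or unsupported"
        return results

    if __name__ == "__main__":
        for name, outcome in probe("www.example.com").items():
            print(f"{name:8s} {outcome}")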

Additionally, even if a protocol is still considered secure, a browser may start enforcing additional requirements for the SSL certificate used with that protocol. “Under the covers” this is a rather different situation, but for the purpose of this discussion I will lump them together, since the average user doesn’t care about the technical differences, only that a service that they used to be able to access no longer works.

In theory, this is a good idea – nobody wants their financial details “sniffed” on the way between you and your bank. However, the browser authors have decided that all usage of those older protocols is bad and should be prohibited. They make no distinction between a conversation between you and your bank vs. a conversation between you and another site (which could be a web server, a UPS battery backup, a water heater, or even a light bulb!) in your house or company. Instead, they force you to disable all encryption and communicate “in the clear”.

To add to the complexity, each browser does things in a different way. And the way a given browser handles a particular situation can change depending on the version of the browser. That isn’t too bad for Internet Explorer, which doesn’t change that often. Two other browsers that I use (Mozilla Firefox and Google Chrome) seem to release new versions almost weekly. In addition, the behavior of a browser may change depending on what operating system it is running under. Browsers also behave differently depending on when the host at the other end of the connection obtained its security certificate. A certificate issued on December 31st, 2015 at 23:59:59 is treated differently than one issued one second later on January 1st, 2016 at 00:00:00.

In the following discussion, the terms “site” and “device” are generally interchangeable. I sometimes use the term “device” to refer to the system the browser is attempting to connect to. “Site” might be a more accurate term, but for many users a “site” implies a sophisticated system such as an online store, while an intelligent light bulb is more a “device” than a “site”.

In a perfect world, people could just deal with the browser blocking issue by installing new software and / or certificates on all of the devices they administer. Sure, that would be a lot of work (here at home, I have several dozen devices with SSL certificates and in my day job, I have many hundreds of devices) and possibly expense (the companies that sell the certificates don’t always allow users to request updated certificates for free, and updated software to handle the new protocol may not be free – for example, Cisco requires a paid support contract to download updated software). However, it is not that “easy” – any given device may not have new software available, or the new software still doesn’t handle some of the latest protocols.

This leads to an unfortunate game of “whack-a-mole”, where a browser will change its behavior, a company will implement new software to deal with that new behavior, but by the time the software has gone through testing and is released, the browser has changed its behavior again and the updated software is useless. A number of vendors have just given up supporting their older products because of this – they have finite resources and they choose to allocate them to new products.

The browser authors seem to feel that this is just fine and that users should either turn encryption off or throw away the device and buy a new one. Since the “device” is often a management function embedded in an expensive piece of hardware, that simply isn’t practical. A home user may not feel that replacing a working device is necessary and a business likely won’t replace a device until the end of its depreciation cycle (often 3 or 5 years).

This strikes me as a very poor way for browsers to deal with the situation. Instead of a binary good / bad decision which the user cannot override, it seems to me that a more nuanced approach would be beneficial. If browsers allowed continued usage of these “obsolete” protocols in certain limited cases, I think the situation would be better.

First, I agree with the current browser behavior when dealing with “Extended Validation” sites. These are sites that display a (usually) green indication with the verified company name in the browser’s address bar. In order to purchase an EV certificate, the site needs to prove that they are who they say they are. For example, your bank almost certainly uses an EV certificate. Users should expect that sites with EV certificates are using secure methods to protect connections. If a site with an EV certificate is using an obsolete protocol, something is definitely wrong at that site and the connection should not be allowed.

Second, the current behavior is OK when dealing with well-known sites (for example, amazon.com). This is a little more difficult for browsers to deal with, as they would need to keep a list of sites as well as deciding on criteria for including a site on that list. However, there already is a “master list” of sites which is shared between various browsers – it is called the HSTS Preload list. It could be used for this purpose.

Now we get to the heart of the matter – how to deal with non-EV, non-well-known sites. Instead of refusing to allow access to a site which uses an insecure protocol, a browser could (a rough sketch of this logic appears after the list):

  • Display a warning box the first time a site is accessed via an insecure protocol and let the user choose whether or not to proceed.
  • Re-display the warning after a reasonable period of time (perhaps 30 days) and ask the user to re-confirm that they want to use the insecure protocol to access the site.
  • On each page, indicate that the page is using an insecure protocol. This could be done by displaying the URL in the address bar on a red background or similar. Google Chrome does something similar with its red strikethrough on top of the https:// in the address bar. Unfortunately, in most cases Chrome will simply refuse to access a site it deems insecure.
  • NOT require dismissing a warning each time the user accesses the site.
  • NOT require a non-standard way of specifying the site URL in the address bar, bookmarks, etc.
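
To make that concrete, here is a rough sketch of the decision flow I’m proposing. Every name in it (the function, its parameters, the consent store) is hypothetical – it illustrates the policy, not any browser’s actual code.

    from datetime import datetime, timedelta

    RECONFIRM_AFTER = timedelta(days=30)   # the "reasonable period of time"

    def allow_obsolete_tls(hostname, is_ev, is_preloaded, consent_store, ask_user):
        """Decide whether a connection that only offers an obsolete protocol
        may proceed. consent_store maps hostname -> datetime of the last OK;
        ask_user shows a one-time warning and returns True or False."""
        if is_ev or is_preloaded:
            return False                       # EV / well-known sites: hard fail
        last_ok = consent_store.get(hostname)
        if last_ok is None or datetime.now() - last_ok > RECONFIRM_AFTER:
            if not ask_user(hostname):         # first visit, or consent expired
                return False
            consent_store[hostname] = datetime.now()
        # The caller should still flag every page (e.g. a red URL background),
        # but must not nag on every load or require a special URL syntax.
        return True

    if __name__ == "__main__":
        consents = {}
        ok = allow_obsolete_tls("ups.example.local", is_ev=False, is_preloaded=False,
                                consent_store=consents,
                                ask_user=lambda h: input(f"Trust {h}? [y/N] ") == "y")
        print("proceed" if ok else "block")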

Security experts will probably be thinking “But… That’s insecure!” It certainly is, but is it less secure than using no encryption at all (which is what the browsers are currently forcing users to do)? I don’t think so. In many cases, both the user and the site they are connecting to are on the same network, perhaps isolated from the larger Internet. For example, most devices here are only accessible from the local network – they are firewalled from the outside world.

Technical note: I am only talking about insecure protocols in this post. There is a different issue of bugs (problems) in some particular implementation of SSL – for example, OpenSSL. However, those problems can usually be fixed on the server side by updating to a newer SSL implementation version and generally do not remove protocols as part of fixing the bug. My post is focused on servers that are too old and / or cannot be updated for some reason, which is a completely different issue from server implementation bugs.

What do you think? I’d like to see comments from end users and security experts – feel free to try to shoot holes in my argument. I’d love to see comments from browser authors, too.

This web site is moving to a new URL

After 20+ years at tmk.com, I’ve sold the domain. The new home for all my stuff is glaver.org. Please update your bookmarks accordingly, as tmk.com is only guaranteed to work until the end of January, 2016.

If you’re wondering what a glaver is, click on the “So what’s a glaver anyway?” link in the “INFO” section on the right-hand panel.

Edited to correct a global search-and-replace that clobbered the tmk.com name.

Brother Printer Upgrade Follies

“Well, I’ve been to one world fair, a picnic, and a rodeo, and that’s the stupidest thing I ever heard…”
— Major Kong, Dr. Strangelove

That pretty much sums up my feelings about the firmware update “procedure” Brother provides for their printers. Some time ago I purchased a Brother HL-6180DW to replace an aging LaserJet 2200DN which had decided to either feed multiple sheets or no sheets from the paper tray.

I have no issues with the HL-6180DW as a printer – it has worked fine for over a year, does everything I ask it to, and successfully pretends to be the LaserJet 2200DN that it replaced so I didn’t have to update any drivers. However, I went to reconfigure it the other day to change its hostname and was greeted by the dreaded https strikethrough in Google Chrome (the “Your connection is using an obsolete cipher suite” error):

“No problem,” I thought to myself “I’ll just download the latest printer firmware.” I discovered that it is nowhere near that simple.

The first thing I did was download the latest updater from the Brother support site. Running the updater produced an unhelpful “Cannot find a machine to update.” error. Searching the support site revealed that this was apparently because I did not have the Brother printer driver installed. Of course I don’t – the whole purpose of this printer is to emulate printers from other manufacturers so people don’t have to install drivers when replacing the printer.

I then downloaded the printer driver from the Brother support site and ran it. It self-unpacked into a directory tree which contained no documentation. Fortunately, there was only one .exe. Unfortunately, running it appeared to have no effect other than popping up the Windows “Do you want to let this program make changes to your computer” alert box. Back to the Brother support site, where this support document bizarrely states:

“Case A: For users who connect the Brother machine to their computer using a WSD or TCP/IP port

Connect your computer to the Internet.
Connect the Brother machine to your computer with a USB cable.
The driver will be installed automatically.”

So, in order to install a network printer driver I don’t want, I have to find a USB cable and connect the printer to a PC via a USB port? That is downright bizarre… Armed with a USB cable, I do that and lo and behold, a new printer shows up which claims to be the Brother, attached via USB.

Back to the firmware update utility. Hooray! My printer is detected, and after agreeing that Brother can collect lots of information I don’t really want to give them, I finally get to click on a button to start the firmware update. After a long pause, it tells me that it cannot access the printer (which it detected just fine a few screens back). It tells me that I should check my Internet connection, disable the firewall, sacrifice a chicken, and try again. I proceed to:

  • Disable Windows firewall on my PC
  • Disable the Cisco firewall protecting my network
  • Disable IP security on the printer
  • Disable IPv6 on the printer
  • Disable jumbo frames on the printer

None of which has any effect whatsoever.

After more flailing around, I decide on a desperate measure – I will change the printer port from USB to TCP/IP in the printer properties. A miracle – running the update utility produces a request for the printer’s management password, after sending my personal data Yet Again to Brother (or is that Big Brother?). After an extended period of watching the progress bar move at a varying rate (and jump from 80-odd percent complete to 100% complete), the update has finished!

After making sure I can still print from the other computers that still think they’re talking to a LaserJet 2200DN, I go back into the PC I used for the update and re-enable Windows Firewall. Then I re-enable the Cisco firewall protecting my Internet connection. Lastly, I restore all the settings that I changed on the printer.

“All is as it was before…”
— Guardian of Forever, Star Trek

Back to Chrome to make sure this fixes the https strikethrough… no such luck. Hours wasted for no gain.

I have NO IDEA why Brother thinks this is a good idea. Maybe they’re paranoid about people getting access to the firmware images (although anyone with access to the network and a copy of Wireshark could capture it “on the fly”). The update utility messages could be vastly improved, instead of the Homer Simpson-style “D’oh!” it produces now. The support documentation could also be improved to actually explain what the utility needs in order to update the firmware.

Of course, my decade-old HP LaserJet 9000DTN came with an add-in network card which has a simple “Download firmware update from HP” button on its web management page (which, amazingly, still works despite HP having rearranged their web site multiple times since that card was new).

In a corporate network where I would have to get IT support involved in disabling my PC’s firewall, or (good luck!) disabling the corporate firewall in order to satisfy the Brother update utility, I think people would simply give up and not update the printer firmware.

And don’t think you can cheat and tell Brother you’re running Linux – the downloads for Linux don’t include a method to update the firmware.

De-bloating the Dell Server Update Utility – Yet again

Dell has released the 2015.09 SUU, and it continues to expand:

10/08/2015 08:13 PM 15,559,686,144 SUU-32_15.09.200.74.ISO

If I were growing at the same rate, I’d no longer fit through my front door. The SUU has grown by over 2.5GB since the previous release, only 2 months ago.

Even after de-bloating we’re left with a resulting size of 7,082,702,599 bytes, which is well into double-layer DVD territory. If the SUU continues its current rate of expansion, the next update may not even fit on a double-layer DVD.
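
For anyone wondering how close to the edge that is, here is a quick back-of-the-envelope check. The DVD capacities are rounded nominal figures I’m assuming (exact capacity varies a little between media types):

    # Back-of-the-envelope check of the sizes quoted above.
    DVD_SL = 4_700_000_000        # nominal single-layer "4.7 GB" disc
    DVD_DL = 8_500_000_000        # nominal double-layer "8.5 GB" disc

    full_iso  = 15_559_686_144    # 2015.09 SUU ISO as shipped
    debloated =  7_082_702_599    # size after de-bloating
    growth    =  2_500_000_000    # "over 2.5GB" of growth in two months

    print("De-bloated fits a single-layer DVD:", debloated <= DVD_SL)   # False
    print("De-bloated fits a double-layer DVD:", debloated <= DVD_DL)   # True
    print(f"Headroom left on a DL disc: {(DVD_DL - debloated) / 1e9:.2f} GB")
    # If anything like the last release's 2.5GB of growth survives
    # de-bloating, the next de-bloated image overflows a DL disc too.
    print("Survives another 2.5GB of growth:", debloated + growth <= DVD_DL)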