We move a lot of product through our warehouse. In addition to our own asset disposition clients, we are the place where many other firms go to sell the retired hardware they removed from your business. Packages come in daily for us to test and repair before we accept them.
The same team that packs and ships our equipment in the afternoon unpacks what others send to us in the morning. They are well seasoned and not surprised by anything, so when the package in the photo to the right came in, they thought nothing of it. “We’ve seen it all,” said our shipping manager, Aaron.
It is difficult to express exactly the difference packing can make in the confidence a customer gets. Our standards are high. Everything is shipped in new boxes with fresh packing that has not already suffered stress and compression. Items that do not have manufacturer’s packaging and outer boxes are first nested in custom-contoured closed-cell foam. Large, heavy boxes are strapped onto pallets with nylon or metal banding. Corner reinforcements are fitted to the exterior of the boxes to ensure the banding itself does not pierce or damage the boxes. After the packages are secured, the entire pallet is shrink-wrapped, then properly labeled on all sides. Although it sounds involved, our staff are so experienced they can do it in just a few minutes.
Once in a while, someone will wonder if our standards are overkill. Then we look at our record of less than one package damaged in transit per year. We remind ourselves that we do not have customers complaining of careless handling. And when a package like this arrives, we know we are doing the right thing.
When you need new or replacement hardware, we will have the items you need, and they will arrive at your door in the same condition as when they left our warehouse.
“You need preparing, but most of all, you need constant, never ceasing vigilance.”
—Alastor “Mad-Eye” Moody
In April, when Rebekah Brown wrote about the release of the NSA hacking tools, she said it was not time, yet, to build a bunker. While the jury is still out about the bunker, in the same Rapid7.com blog she correctly predicted, “It will not be long before we will start to see more widespread attacks using these tools.” Less than a month, it turns out.
It is easy to brush off the over-reported drama of the WannaCry attack, but there are a few lessons worth our time. For example, by now everyone has heard that the attack used EternalBlue, one of the vulnerabilities stolen from the NSA and released to the world. What no one is telling you is that it was just one of several NSA hacking tools revealed in the data dump. Along with EternalBlue are EternalSynergy, EternalRomance, EternalChampion, EmeraldThread, EskimoRoll, EducatedScholar, and EclipsedWing. The NSA has a lot of time on its hands.
Theoretically, those were all patched by the Microsoft release in March and then for older systems on May 13, 2017. The May 13 patch was in reaction to the WannaCry attack, not an indication regular updates will continue for those older systems. This raises the question of just who is responsible for the weakness. Clearly, the ransomware hackers are ultimately to blame, but there are, and always will be, bad actors. If you leave your car doors unlocked there’s a good chance someone will steal your Ray-Bans. Who is ultimately responsible for locking the software doors?
The knee-jerk answer is Microsoft. All over the Internet this week, it’s open season on Seattle. Microsoft patched the known vulnerability in March for all systems they currently support. Administrators and users running currently supported systems, who did not install the updates, have no one to blame but themselves. It is clear we have moved past the era when updates, particularly security updates, are optional.
Lesson One: Apply Updates.
A significant number of the computers affected were older models that Microsoft no longer supports. Should Microsoft support them? Federal law only requires automakers to keep parts available for a car within the warranty period, the longest interval being eight years for emissions parts. Microsoft replaced XP with Vista in 2007, so XP installations are 10 to 16 years old. Just how long is a software developer responsible for old software? Moreover, this attack spread through fake emails. Is Microsoft responsible when your staff click on an email promising “This kitten will make you cry”?
People clinging to XP know they are working on borrowed time. Similarly, three or four years ago, tech administrators were reading articles that recommended migrating away from Windows Server 2003. At what point does holding on to an unsupported operating system become the end user’s responsibility? A few weeks ago, in this blog, I wrote that four-, five-, and even six-year-old servers are still viable, but retaining a ten-year-old operating system is clearly an at-your-own-risk proposal.
More than 230,000 computers in 150 countries took that risk. That is the estimate of units attacked by WannaCry. The 230,000 does not include the countless XP, Server 2003, and other older builds that were not victims of the ransomware only because their users didn’t open the phishing emails. The ubiquitous world map of blue dots showing attack locations is educational. You might expect that the older and non-updated systems would be concentrated in developing countries, but the map shows a distinct cluster in our Pacific Northwest, the very home of Microsoft. The British National Health Service made the news, but ask anyone in the US medical profession and they will tell you there are scanners, imagers, and other stand-alone devices, used every day, running a Windows XP interface. Brazil’s social security system, German Railways, Spain’s Telefonica, and French automaker Renault were all victims of the attack, all running old software.
Lesson Two: People, businesses, and governments do not replace old equipment.
Given the realities that people are not prompt with updates, and that computers can stay in service and continue to perform years beyond end-of-support, cybersecurity is a worldwide problem. EternalBlue and the seven other weapons stockpiled by the NSA are dangerous. When the NSA let them slip away, they were irresponsible at best. Regardless of your position on the world order and immigration, cyber weapons are a world concern. Unlike nuclear bombs and mustard gas, cyber weapons are easy to deploy and have no geographic restrictions. It takes a missile or plane to deliver a bomb. A guy in his pajamas with a computer and a bad attitude can launch a cyber-attack.
Lesson Three: Governments building cyber weapons must protect them as they do warheads.
It is unlikely that spy agencies (the USA is not alone) will give up their cyber tools. Microsoft and other software vendors can only be expected to support systems for a reasonable interval. People will never be fully compliant with updates, even when provided. Lastly, computers running old operating systems will not disappear. Given these absolutes, the best solution is still common sense and personal attention. These attacks spread through phishing emails that a single user in the system opened. The people on the front lines, those tricked by the promise of foreign payouts, fake PayPal invoices, and hot Russian brides are the gateway for attacks.
Lesson Four: Don’t be that guy.
You don’t need a bunker. Just be vigilant. Do not open emails, especially attachments, from people you don’t know. Be suspicious of anything that is not in your normal email routine. To paraphrase Smokey the Bear: “Only you can prevent ransomware.”
As it has since 1976, Frontier Computer can provide IT hardware and enterprise computing solutions. Our expert logistics team can even deliver to your bunker.
If you spend any time around IT people you might think that in-house data centers and servers are as rare as pay phones. Like many things in popular culture, however, the reality is different from the perception. In the most recent study, released May 1, 2017, the Uptime Institute found, “the percentage of workloads residing in enterprise-owned/operated data centers has remained stable at 65 percent since 2014.” It would be easy to latch on to the 65% number and miss perhaps the most important element in their summary finding: the 65% has remained stable.
In weather reporting, “partly cloudy” means more sun than clouds.
Looking deeper into the Uptime report shows details that do not quite mesh with the current buzz that everything is moving to the cloud. Of the 35% of IT work that wasn’t being done in owner-operated data centers, 22% was still in data centers, just co-located or multi-tenant ones. As of May 1, 2017, only 13% of enterprise computing worldwide is in the cloud.
In the forty-five-minute webinar on the study, Matt Stansberry, senior director at Uptime Institute, noted that over the last five years of the study, the industry percentage of in-house IT data centers has only changed within the margin of error. In short, there isn’t a decline. While he noted that about a third of respondents did plan to deploy some workloads in the cloud in the next year, Stansberry said that 50% of cloud deployment is for new computing capacity and growth, much of it by large IT users like Netflix, Amazon, and Microsoft.
Given the industry buzz supporting cloud computing, IT managers would be negligent not to explore the options. Overall data center footprints are shrinking, but cloud deployments are only one aspect. Increased server performance and technology are shrinking hardware. In addition, a significant portion of what is characterized as a move to the cloud is in fact virtualization implemented on owned servers. Small server installations in remote offices have been replaced, but with a virtual presence in the company’s own data center. There is new growth in the cloud, but not necessarily a rapid revolution.
As with all technology, things will change, but the just-released report of data collected in early 2017 from 1,000 IT professionals suggests that enterprise-owned/operated data centers are the rule rather than the exception, and that redeployment in the cloud is still a small part of the industry. At the end of his presentation of Uptime Institute’s findings, Matt Stansberry summarized, “Enterprise-owned data centers have remained a central component. We urge data center and IT professionals to focus on the business aspects of running their IT foundation.”
As it has since 1976, Frontier Computer can provide IT hardware and enterprise computing solutions. We have experts who can identify the best way to deploy your workload — in-house, shared, or cloud — and help you transition when it’s time to make a change.
If you Google “IT Product Life Cycle,” you’ll get an MBA’s worth of information about how manufacturers can track the life of a product. The definition of “product” they are using, however, is better described as a model. The business-school life cycle is concept, design, marketing, blah, blah, blah. The Harvard and Wharton types break it into introduction, growth, maturity, and saturation. It is mostly a market assessment, not really a product evaluation. In fact, the day someone buys the actual product, it falls off the retailer’s cycle. The object of their whole plan is the sale, which is where their interest ends.
For our purpose here, we are not interested in how long Ford can continue to sell cars called Taurus — seems to be forever — or how long IBM can call it Power 8, before they have to bust out the 9. The real question is what happens once a specific piece of IT hardware hits the streets, the real beginning of the IT product Circle of Life. For IT Hardware, the circle starts with the installation of the new piece of hardware at its first work site. Like many of us at our first jobs, the future is not bright for Mr. Server; in a few years he’ll probably be looking for a new job.
Like many others, Brett Nordquist at Recovery Zone, Storagecraft.com’s blog, recommends replacing a server every three years. He cites the escalating cost of support as his major reason. “According to IDC, starting in the fourth year, support costs increase about 40%. By year five, you’re staring at a 200% increase. Hold on to the server for seven years and support costs hit a mind-boggling 400%.” Sadly, his is a common view and rationale, despite being flawed. The reasoning is based on the OEM warranty and on using OEM support. The original manufacturer has a stake in keeping the cost of support high for years four, five, and beyond. High maintenance costs will make new hardware look more appealing. Other than as a market driver for Mr. Server’s creator, the three-year interval has no basis in reality.
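To see why those percentages sound so alarming, it helps to put them against a concrete number. The sketch below applies the quoted increases to a hypothetical $1,000-per-year baseline contract (the dollar figure is ours, purely for illustration):

```python
# Illustrative only: the quoted IDC increases applied to a hypothetical
# $1,000/year baseline OEM support contract covering years one through three.
baseline = 1000.0

# Year -> fractional increase over the baseline, per the figures quoted above.
increase = {4: 0.40, 5: 2.00, 7: 4.00}

for year, pct in sorted(increase.items()):
    print(f"Year {year} support: ${baseline * (1 + pct):,.0f}")
```

Read that way, a contract that cost $1,000 in year one costs $5,000 by year seven. The point stands either way: it is the OEM pricing curve, not the hardware, that makes older servers look expensive.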
The escalating costs of OEM service are not the only support option. We know more than a little bit about this at Frontier, because our FrontierCARE provides high levels of support for IT hardware, including 24/7 service with as little as two-hour response times. There are other non-OEM service options beyond FrontierCARE, and when the primary function of the company providing support isn’t to sell new hardware, the costs decrease significantly. FrontierCARE runs about 50% of the cost of comparable OEM support contracts. It could also be argued that the most knowledgeable support happens at the two-year-plus mark. By then the flaws and quirks of any system are widely known, expected, and quickly addressed.
There are other reasons why Mr. Server is ready to stay on the job past 36 months. Omitting the rare piece of enterprise IT hardware with design flaws, servers are sturdy things. These are not laptops from Best Buy. They are expected to operate 24 hours a day, all year, without rest. When installed correctly in the proper environment, servers can take it. There will be component failure — even new components fail sometimes — but parts are available. With the exception of limited-production hardware, like DC servers or certain military-specified hardware, there is a brisk secondary market and inventory of IT parts. Again, this is something we know well at Frontier. Our warehouse holds more than 100,000 product SKUs, and only a fraction of them are currently available from the original manufacturer. This first stage in the circle of IT life does not have to be only three years.
Whether stage one, initial installation from new, lasts three years, five years, or only 18 months, it’s just an arc on the circle of IT life. Frontier Computer Corp. is where the second stage of life starts for Mr. Server and many of his contemporaries. Frontier was founded 40 years ago buying various equipment that was coming off lease. Today, we are where servers go when they are released from their first assignment.
When the hardware comes into Frontier, our technicians fully test it, replace anything that isn’t working properly, clean it up, and send it to its next job. For many of those servers, stage two will be the same as stage one. Often servers coming from a data center that is replacing its hardware will go into another data center that is being installed. Some will replace an identical unit that has failed in an installation as old as the one that was decommissioned. One IT administrator’s cast-off is another’s solution. This arc in the circle sometimes repeats itself.
What might be old technology for a major corporation in a first-world country can be the perfect solution for a start-up or a small legacy company in other parts of the globe. Frontier ships fully configured, tested systems worldwide. Sometimes those systems are going on their third deployment, further extending the life of a product. Not every unit makes it even to a second deployment. How long the server remains at its first installation, its original configuration, and how well it fares in testing determine when it moves from functioning as a whole server to becoming a donor for parts.
Even when Mr. Server is taken out of service as a whole unit, it isn’t over. At Frontier, our techs go to work turning a retired server into a parts warehouse. Components are tested and, when they pass, entered into inventory. For most equipment, there is not much that doesn’t become a spare. Processors, power supplies, drives, connectors, memory, and a long list of other components can all be useful. Even case parts, like bezels, can become spares. The younger a server is when retired, the more likely it is to yield parts. If there are bits here and there with no utility left, Frontier recycles them into raw materials. Only those bits move to the end of the circle.
The Circle Can Be Unbroken
At the parts level, Mr. Server completes the circle. One of the reasons IT managers replace servers with newer models is fear of a lack of available spares. When a server or any major assembly gets decommissioned into parts, ironically, those parts are the resource that keeps other similar models in service. FrontierCARE clients enjoy a flow of ready spares even when parts are scarce at the manufacturer level, a significant advantage over other service options.
The real end of life for Mr. Server happens many, many years later, only when all of the original models are uninstalled, and all of the components lose their value as spare parts. The value of spare parts, however, can even outlive a specific model. When a new version is introduced, it doesn’t necessarily mean all new components. Mr. Server’s connectors, heat sinks, and even case components may well have the same part number as the next generation. Alas, eventually every vestige of Mr. Server will disappear from data centers worldwide. When that time comes, at Frontier every useful resource that once was Mr. Server is recycled into raw materials, to begin the circle again.
The manufacturer may limit the product life cycle of a specific model using designations like end-of-life and end-of-service, but those terms bear little resemblance to the actual working life of one specific unit. Enterprise-quality IT hardware has a long life, even with a short initial installation. How long IT hardware remains useful to the original purchaser may have more to do with the maintenance choices of the user than with the hardware itself. Those choosing only OEM support will find their assets needing frequent replacement. Those who choose a different maintenance option, like FrontierCARE, can realize benefits on their investment for much longer, usually with less downtime and less total expense.
When you have IT assets to manage you can count on FrontierCARE to help you get the most out of your investment. We can keep them up and running, and when they are truly at the end of their useful lives, assure that they are properly recycled to begin the circle once more.
Since the Sony Xperia Z and Samsung Galaxy S5, around 2013, common smartphones have been IP67 “water resistant.” Now Apple has built the iPhone 7 to the same specification, and once Apple jumps on the train, more people start asking questions. The phones are rated IP67, but what exactly does that mean? It is a tight seal, but not something you want to take snorkeling.
Technically, IP67 means the phones, as they come out of the box, can take immersion in 1 meter of water for 30 minutes. The certification states that the seal will prevent “ingress of water in harmful quantity.” It can leak, but not enough to cause damage, a vague standard for something as sensitive as a smartphone. Two meters would mean more pressure, and all bets are off. As the phones age, spend time in your pocket, and get dropped, the seals will lose integrity. The Samsung Galaxy S7 and S8 are now IP68 rated, which protects against immersion deeper than 1 meter, but the standard does not specify a depth benchmark. Samsung says the S8 is good to 1.5 meters; let’s call it 4-1/2 feet. In other words, it will not end well if you jump into the deep end of the pool with your S8.
IP standards do not apply well to handheld objects that are subject to daily bumps, occasionally being sat upon, and just general abuse. The ratings, however, are a sound measurement for electronic equipment installed in one place even when the conditions at that location change drastically. The IP (for Ingress Protection) standards were developed by the IEC, International Electrotechnical Commission. The “IP” is followed by two digits. The first digit represents penetration by solids and the second digit is for liquids. It helps to understand how the ratings work.
It takes an IP3x rating to keep out objects greater than 2.5mm, which means you could still poke a flat blade screwdriver into the sensitive parts. IP4x will keep out most chunks, but sand and dust are going right in. IP5x is dust “protected,” which means dust can still get in, but will not “enter in sufficient quantity to interfere with satisfactory operation.” If it strikes you that words like sufficient and satisfactory are less than precise, you are starting to understand the situation. Finally, the IP6x rating draws a line, “no ingress of dust.” Yet, when I open my 3-year-old, IP67-rated Samsung S5, there’s plenty of pocket lint in there. IP ratings do not account for pockets.
The second IP number is for liquids, which is where the rubber meets the road for electronic equipment. An IPx2 will keep out dripping water. The IPx3 rating keeps out spraying water, and IPx4 keeps out splashing water. The distinction between 3 and 4 seems to be that splashing can come from any angle. The ratings get serious at IPx5, which means, “Water jets projected by a nozzle (6.3mm) against the enclosure from any direction shall have no harmful effects.” For most products mounted outdoors IPx5 covers rain, even in wind. IPx6 protects against “water projected in powerful jets (12.5mm).” The powerful jets protection is the sort of seal that will keep out storms up to low-grade hurricanes. You can take IPx7 devices swimming in very shallow water. The most liquid resistant is the IPx8, but the difference is just that it goes beyond the IPx7’s 1 meter immersion rating, but how much beyond is up to the manufacturer to specify.
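Because the two digits are independent, an IP code can be decoded mechanically. Here is a minimal lookup table paraphrasing the protection levels discussed above (the wording is ours, condensed from the standard, not the IEC’s exact language):

```python
# Minimal IP-code decoder. Descriptions paraphrase the IEC 60529 levels
# discussed above; consult the standard itself for the exact wording.
SOLIDS = {
    0: "no protection",
    1: "objects > 50 mm",
    2: "objects > 12.5 mm",
    3: "objects > 2.5 mm",
    4: "objects > 1 mm",
    5: "dust protected (limited ingress)",
    6: "dust tight (no ingress)",
}
LIQUIDS = {
    0: "no protection",
    1: "vertically dripping water",
    2: "dripping water, enclosure tilted 15 degrees",
    3: "spraying water",
    4: "splashing water from any direction",
    5: "water jets (6.3 mm nozzle)",
    6: "powerful water jets (12.5 mm nozzle)",
    7: "immersion to 1 m for 30 minutes",
    8: "continuous immersion beyond 1 m (depth set by manufacturer)",
}

def decode_ip(code: str) -> str:
    """Decode a rating like 'IP67' into its two protection clauses."""
    solids, liquids = int(code[2]), int(code[3])
    return f"{code}: solids - {SOLIDS[solids]}; liquids - {LIQUIDS[liquids]}"

print(decode_ip("IP67"))
print(decode_ip("IP55"))
```

The structure makes the smartphone comparison easy: the first digit of IP67 and IP68 is identical; only the immersion clause differs.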
In all of these ratings, fresh water is used in testing, simulating rain and conditions in nature. It should not be assumed that other liquids, which may be more volatile than water, will be prevented from entering an enclosure. For industrial installations with chemicals and other liquids, specific testing is the only way to confirm protection. The IP standards for solids and liquids ingress protection are tested only with new equipment with full seals intact. It is up to manufacturers to take testing beyond the basic IP standard.
How do IP ratings apply in actual installations?
Pepwave houses several of their products in IP-certified enclosures, at two levels of certification. The AP One Flex Wireless Access Point, MAX BR1 IP55 Cellular Router, and MAX BR2 IP55 Cellular Router are in all-weather, plastic, IP55-rated enclosures. These products are widely used outdoors and give years of service in all-weather applications and temperatures. In all but hurricane-force wind, they will withstand rain and dust.
Pepwave’s iconic, industrial-level, all-metal IP67 enclosure houses the AP Pro line of access points and the MAX HD2 IP67 Dual Cellular Router. For applications anywhere but in standing water, these IP67 enclosures can fight off weather, storms, high winds, and temperature extremes. The Pepwave AP Pro routers were used to bring the Longboat Key community Wi-Fi for 212 homes. Weather conditions do not get more severe than on a barrier island off the coast of Florida.
IP standards tell a great deal about water and particle resistance but do not address other factors that can affect electronic performance. Pepwave goes beyond the IP standard with their own aggressive testing. Peplink maintains their own Faraday chambers and tests every product for performance in thermal extremes. The products housed in the IP55 and IP67 enclosures are all tested and continue to perform over several days at temperatures as low as -40°F and as high as 149°F. The AP Pro access points and MAX HD2 IP67 are also certified for RF Port Lightning Immunity to ITU-T K.20 (±1.5 kV) and EN 61000 Electromagnetic Compatibility.
Even the Peplink products not intended for outdoor use undergo rigorous testing for extremes. The entire Pepwave MAX BR and HD lines of cellular routers are tested and will perform in temperatures from -40°F to 149°F for installation in remote locations where heating and cooling are not available. The Pepwave MAX Transit line can handle both temperature extremes and significant vibrations. The MAX Transit and MAX Transit Duo cellular routers are certified for Shock and Vibration Resistance, Railway Applications, Electronic Equipment used on Rolling Stock, and Electromagnetic Compatibility.
While there is no certainty that your smartphone – Samsung or Apple – will withstand a jump in the lake, you can count on Peplink and Pepwave devices to perform up to and beyond their IP rating. Frontier Computer can help you match the right Peplink or Pepwave products to both your application and the conditions at the installation site.
As it has since 1976, Frontier Computer can provide IT hardware and enterprise computing solutions. We have experts who can identify the best tools to maintain your connections in any weather.
Even with the high-quality signal of a Peplink router, the best way to maximize connection speed and throughput is with the right antenna. Recently our colleagues in the Netherlands ran a simple test of the same Peplink hardware with three different antennas and found that the right antenna can double connection speed on the same network, with all other conditions the same.
With the standard 2-dBi antennas included with the Peplink device, they measured 43.58 Mbps download and 17.09 Mbps upload speeds, about what was expected from the device. When they changed to a high gain omnidirectional antenna the speeds more than doubled to 87.18 Mbps downstream and 45.25 Mbps upstream. With a directional high gain antenna, the download speeds increased slightly more to 105.95 Mbps, although upload speeds remained the same, perhaps limited by the connection itself.
Omni vs. Directional
It is easy to look at these results and assume that a directional antenna will be the best choice, but that isn’t always true. A directional antenna requires a direct line-of-sight, and must be pointed accurately between the two connected devices. Directional antennas also have a narrower reception window. Directional antennas are not well suited to applications where the receiving device is highly mobile. However, when the signal needs to travel the longest distances, the extra boost of a directional antenna can make a difference.
Omnidirectional antennas spread their signal across a wider area. The wider signal will have less concentration in one area, but allows easier placement. Omnidirectional antennas can be mounted with less regard to the receiving device, and are significantly better when connecting to multiple devices in different areas. While omnidirectional antennas will spread a signal in all directions, it is still important to locate the antenna in the center of the sending area. An omnidirectional antenna mounted in the corner of a building will be less effective in the opposite corner than the same antenna mounted centrally, to spread the signal evenly throughout the building.
“When you look at the results of the tests, it becomes clear that an antenna can be a value-adding product to double your signal strength and data speed.”
If you find yourself at a party with a person who will not stop talking, try asking, “What do you know about this ‘Internet of Things?’” Most people will just stare back, confused. For all the buzz, there is not much clarity. The Internet of Things, or IoT for us insiders, is really more of a concept than a technology. It isn’t something you can buy, invest in, or hold in your hand. The term itself was coined in 1985, six years before there was a World Wide Web and back when the word Internet just meant a network between computers, the same way Interstate means a highway connecting two states. It seems impossible that when Peter T. Lewis used the phrase at an FCC conference he had any idea that it would take on its current meaning.
So what is “The Internet of Things?”
The short answer is that it is a concept. The longer answer is that it is the idea of connecting things that we have not previously thought of as requiring a connection. We always understood that a communications device, like the telephone or telegraph, needed a network; such devices simply did not exist without one. Other resources, however, have not always had a network. Water, for example, was transported in containers before there was a system of pipes to carry it everywhere. Now we cannot imagine water that doesn’t flow right out of the faucet. Before the electrical grid, energy was unleashed on site, heating water with burning coal or wood and using the power of steam to generate electricity where it was needed. Similarly, we have countless tools in our lives that have never required a connection to each other. Those are the “Things” in IoT.
For lay people, those Things are mundane. If you hear people using “smart” as an adjective, they are describing a device on the Internet of Things. It’s hard to imagine how important it is to have a smart refrigerator, and there have been movies made about smart houses rebelling (the smart house in the television series Eureka is prone to jealousy). However, there are significant IoT technologies beyond being able to place an Amazon order from any room in your house. For example, in the past, during power overloads, usually on hot days when air conditioners are all running at full tilt, the only way to avert disaster was with rolling blackouts, where power was selectively cut to various regions in sequence. With smart air conditioners, during peak loads the cooling devices could still operate, but at a higher temperature, to ensure that no one user is without power. While it sounds like Big Brother controlling our lives, it is exactly the same concept as a rolling blackout, only without the 30-minute intervals without any power at all.
Also on the Internet of Things are monitoring sensors to avert a whole host of disasters. Those sensors are already on things like pressure gauges and overload detectors, deployed in industrial and commercial applications, like production lines, utilities and traffic. Until recently those sensors only sent data locally, to a control panel or maybe to another room in the same location. They will become much more common, and will send data anywhere. An engineer in Kentucky can be making decisions about a production line in Thailand using real-time information. That is IoT. Today’s experiments with self-driving automobiles are just a training step. Eventually self-driving vehicles will not rely on radar and optical sensors to know there is another car coming, they will know because the vehicles will be communicating with each other, and with the road itself. Smart pacemakers will not only detect and correct cardiac arrhythmias; they will transmit data to a hospital in real time, and send an ambulance if the device itself can’t be used to solve the problem.
On the less flashy, consumer side, the Internet of Things will be a refrigerator that warns you, reading an RFID tag on the carton, that your milk has gone sour before you take that gut-clenching sip. Your KitchenAid mixer will get instructions directly from the recipe you are reading on yummy.com to whip the perfect meringue, just before the fluffy peaks turn to gelatinous goo. Your bed will know you are walking from the living room to the bathroom, confirm that you have ended your Firefly marathon, receive information from your electric toothbrush, and calculate that you will be arriving soon. It will measure both the indoor and outdoor temperatures, then warm itself to a comfy 72 degrees so when you slip in, your toes will be nice and toasty. That is the pretty side of IoT, but it has warts.
IoT, Warts and All.
Last week in the big WikiLeaks dump “Year Zero” of Vault 7, WikiLeaks claimed, with as-yet-undisputed evidence, that the CIA has fairly easy access to all of your IoT devices. The agency has apparently been listening in on private conversations through smart TVs, even when those TVs are off. WikiLeaks claims in its summary that the CIA can pretty much put a hit on anyone by hacking into the computers in their cars. If you have information the CIA wants and you talk about it on your smartphone, don’t bother sending the spy agency a transcript because they already know. It is easy to hear this and get ready to toss the IoT baby right out the window, but the spying, leaks and lack of privacy are the dirty bathwater. Any new technology brings policy and enforcement issues at the government level, and we will need to demand that they be addressed.
Inherent device security will improve only to the point that end users demand. If news of the CIA hacking smart televisions has no impact on sales of the devices, the market will have spoken. Manufacturers, programmers and governments are only going to take steps to secure the IoT enough to keep users in the game. Given generations of people who think it is perfectly normal to send the world news of their breakfast, who they are sleeping with, and when they drink too much, maybe privacy concerns themselves are changing.
The IoT also raises connectivity issues. When your car is getting 200 instructions per minute, it’s not okay if the connection drops for even a few seconds. We are going to need, and come to expect, a connection that flows like water from our tap. It will not be acceptable for cable connections to slow down when all the kids get home from school at 4:00 and start playing Xbox. More reliable bonded internet connections are available now, but are still mostly used in enterprise applications.
A bonded connection uses different communication channels to make a single, wider, more reliable pipe. For example, a cable hook-up may provide the most economical and fastest connection, and it will be paired with a cellular or satellite connection running next to it. The secondary connection may be significantly more expensive per megabyte, so technologies like Peplink’s SpeedFusion will route all but .00001% of the traffic to the primary connection while keeping the alternative channel open. In the event of a primary channel failure, or even just a hiccup, data will seamlessly flow into the secondary path without even a microsecond of delay. To be fully implemented, the IoT is going to need these unbreakable connections. For more on how Frontier and Peplink are currently involved with IoT, you can read this case study from our office in the Netherlands.
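The primary/standby routing idea can be sketched in a few lines. This is a toy model of the behavior described above, not Peplink’s actual SpeedFusion implementation; the keepalive ratio is exaggerated so it shows up in a small demo.

```python
# Sketch of primary/standby routing: almost all traffic uses the cheap
# primary link, a trickle keeps the expensive backup warm, and on a
# primary failure every packet shifts to the backup at once.

class BondedLink:
    def __init__(self):
        self.primary_up = True
        self.sent = {"primary": 0, "backup": 0}

    def send(self, packet_id):
        # Every 100th packet is a keepalive on the backup here; the
        # real-world fraction quoted above is far smaller.
        if self.primary_up and packet_id % 100 != 0:
            self.sent["primary"] += 1       # bulk of the traffic
        else:
            self.sent["backup"] += 1        # keepalive or failover

link = BondedLink()
for i in range(1, 200):
    link.send(i)
link.primary_up = False                      # primary drops...
for i in range(200, 300):
    link.send(i)                             # ...backup takes over
print(link.sent)  # {'primary': 198, 'backup': 101}
```

Because the backup channel is already open and exchanging keepalives, the switchover needs no new connection setup, which is what makes the failover feel instantaneous.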
While it may not be time to nap while you are driving the freeway, and it is still a good idea to give the milk a little sniff before you take a big chug, it won’t be long before there is no buzz around the words “Internet of Things.” The connection of nearly every device in our lives will be a lot like breathing, something we don’t even think about until it doesn’t work right.
As it has since 1976, Frontier Computer can provide IT hardware and enterprise computing solutions. We have experts who can identify the best tools to prepare your business or organization to join the Internet of Things.
Purple is the color most often associated with royalty and magic. Roman magistrates and rulers of the Byzantine and Holy Roman Empires wore purple. Roman Catholic bishops accessorize with lots of purple, and in Japan, it is the color symbolizing the Emperor and aristocracy. Purple is powerful stuff. So when we tell you that you need Purple for your public Wi-Fi, we are not kidding around.
According to The Pew Research Center, 92% of millennials own a smartphone.
For most businesses, organizations, and municipalities, maintaining public Wi-Fi is a cost associated with doing business, like keeping the restrooms clean. It’s an expense only indirectly related to sales. In some cases, Wi-Fi can even create risk, opening your clients — and through them your business — to potential vulnerability. Purple changes all of that. Purple offers three significant benefits to your business.
At the most basic level, Purple will enhance your clients’ Wi-Fi experience. Using their existing social media connections, users can log in to, and then quickly reconnect with, your Wi-Fi network. Purple creates and maintains a secure, solid connection for each of your users. While that is a valuable benefit for your clients, and will certainly bring a share of goodwill, it is just window dressing on the real advantages of Purple.
A single Facebook “Like” is worth about $135
Purple allows you to know and engage your customer as nothing has since the days when a General Store owner knew every client and all of the client’s family members by name. Using powerful analytics and information available through social media, Purple can give you an overall picture of who your clients are and snapshots of your engagements minute by minute. Purple can give you advanced client demographics, and even deeper knowledge of the things that are important to the people who visit your venue.
73% of millennials go to sleep with their smartphones by their bed.
The real impact of Purple happens once clients are in your store or venue or town. Purple can tell you exactly how long someone stays. It can tell you where they go inside your venue and what things cause them to remain in one place. With that valuable information, you can use Purple to engage clients in real time. People rely on their smartphones, and are engaged by the messages they receive on them. Purple will enable you to communicate directly with your customers, on a device they trust. When visitors are browsing near a specific merchandise area, they can receive automatic notification of sales and promotions. You can send offers for their next visit, or offers to keep them engaged. Municipalities offering public Wi-Fi can give visitors a reason to return or linger with notices of coming events and activities.
For users 18 to 34, social networking is the top smartphone activity.
Even after clients leave your location, Purple continues to work for you. Building on the Social Media connections established during their visit, you can continue to engage clients with offers and information that fit their current interests. You can use your previous clients to introduce new clients, and build advocacy.
Purple is the tool that transforms public Wi-Fi for public spaces: retail, stadiums and arenas, airports, libraries, schools, hospitals, restaurants, and your location. As it has for centuries, Purple means power, and just a little bit of magic. Today Purple has 8 million users making 52 million connections at 15,000 venues. Using the public Wi-Fi you already have installed, Purple can add your business, venue, or municipality to the growing number connecting with their constituents in meaningful ways.
As it has since 1976, Frontier Computer can provide IT hardware and enterprise computing solutions. We have experts who can identify the best tools to create Public Wi-Fi and assure you get the most out of it once it is in place.
Last week the world learned that Cloudflare servers had been leaking data for as long as five months. Before this, Cloudflare already had its share of controversy. In 2013, they were accused in a roundabout way of giving aid and comfort to the enemy because nefarious websites were found to use their service. Cloudflare’s CEO Matthew Prince responded in a blog post, “Cloudflare and Free Speech,” arguing that it is not Cloudflare’s role or place to evaluate and police content on the websites that pass through their servers. Cloudflare has a good case. Again, in 2015, the hacking group Anonymous claimed that Cloudflare was “helping” ISIS because they found ISIS-related sites also using Cloudflare. The Cloudflare CEO called the claims “absurd,” and in a way, he’s right. When we find out that ISIS terrorists use iPhones, we don’t think Apple is helping them. While the claim that Cloudflare “supports” ISIS is absurd, in light of the data breach, it may matter that Cloudflare is so widely used by an unvetted range of customers.
What is Cloudflare?
Cloudflare is designed to serve up your content quickly to the right people while blocking the people with ill intent, including DDoS (Distributed Denial of Service) attackers. To do that, Cloudflare sits between a server and its users and controls and monitors the traffic. Last week Cloudflare publicly admitted that their servers had been leaking data. It was not a hostile attack; a flaw in the Cloudflare system created the breach.
Tavis Ormandy, a white hat hacker from Google’s Project Zero, discovered the leak. Project Zero is a Google division searching for new vulnerabilities on the web. Ormandy found the problem while analyzing Google search results. He found raw data from unrelated websites appearing in searches for other sites. He figured out that Cloudflare was leaking data between its customers’ sites.
That’s just as scary as it sounds. Site A’s information showed up as raw data in Site B’s results. Great steps are taken to isolate one site from another on a shared server, and security experts employ both hardware and software measures to prevent exactly this sort of attack from succeeding; in that regard, Cloudflare gets respect. Since this was not an outside attack, Cloudflare never detected the leak. To oversimplify greatly, when the Cloudflare servers had a buffer overrun (too much data in one place at the same time), some extra data leaked out. The leaks were discovered because search engines cached the extra information, which is where Ormandy found it.
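To make the overrun concrete, here is a toy illustration of the mechanism. Cloudflare’s actual bug was reportedly in C-based HTML parsing code; this sketch only shows why reading past a buffer’s end spills a neighbor’s data, with made-up contents.

```python
# Two "customers'" responses sit side by side in one memory region.
# A length bug reads past the first buffer, so the slice includes
# bytes belonging to an unrelated customer.

memory = bytearray()
memory += b"<html>site A public page</html>"    # customer A's buffer
memory += b"Set-Cookie: session=SECRET-B"       # customer B, adjacent

def read_response(start, claimed_len):
    # The bug: claimed_len overshoots the real buffer, so private
    # neighboring data rides along in the response.
    return bytes(memory[start:start + claimed_len])

leaked = read_response(0, 60)                   # real buffer is 31 bytes
print(leaked)
# site A's page, followed by customer B's secret cookie
```

Because the over-read bytes went out in ordinary HTTP responses, search engine crawlers cached them like any other page content, which is exactly where they were later found.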
Cloudflare has tried to downplay the breach. In a statement, Cloudflare described the “greatest period of impact” as February 3, 2017 to February 18, 2017, just 15 days. The devil is in the details. They said “greatest,” not the only, period of impact. We now know that data had been leaking since September 22, 2016, almost five months. Cloudflare suggests one out of every 3.3 million requests leaked. It sounds low, but the Internet is a busy place. Daily requests to Google alone exceed 4 billion. Many popular sites use Cloudflare, and with all of those sites’ daily traffic flowing through Cloudflare servers, it adds up to much more than a few leaks.
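Some back-of-envelope math shows why “one in 3.3 million” is not reassuring. Google is not behind Cloudflare; its 4 billion daily requests are used here purely as a yardstick for how busy a popular slice of the web can be.

```python
# Back-of-envelope math on Cloudflare's "1 in 3.3 million" figure,
# using a Google-scale request volume only as a yardstick.

LEAK_RATE = 1 / 3_300_000
daily_requests = 4_000_000_000

leaks_per_day = daily_requests * LEAK_RATE
leaks_over_breach = leaks_per_day * 150       # ~5 months of leaking

print(round(leaks_per_day))      # ~1212 leaked responses per day
print(round(leaks_over_breach))  # ~181818 over the life of the bug
```

Even at a tiny per-request rate, traffic at internet scale turns “rare” into hundreds of thousands of leaked responses.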
Cloudflare stated, “The bug was serious because the leaked memory could contain private information and because it had been cached by search engines. We have also not discovered any evidence of malicious exploits of the bug or other reports of its existence.” Read this carefully. They are not saying it didn’t happen, or won’t in the future, just that they did not discover any evidence. They did not mention how hard they looked for “malicious exploits.”
This is where Cloudflare’s unvetted roster of users has significance. The conventional wisdom is that this was not calamitous. Still, since data was leaking from one site while users accessed another, any user of a Cloudflare-protected site may have received the private information. There could have been dire consequences if a few of those users, with less than honorable intent, realized what they had been served. Since we know there is no check on who uses Cloudflare, it is not a stretch to think that data has fallen into hands likely to exploit it. The leaked data includes cookies, authentication tokens, HTTP post bodies, and user data. Tavis Ormandy noted, “We fetched a few live samples, and we observed encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests for other major Cloudflare-hosted sites from other users.”
Time to Panic?
Probably not. Still, this is serious business. Even BuzzFeed, the kings of fluff, are suggesting “If you have or had accounts on Fitbit, Uber, OkCupid, Medium, or Yelp, you should probably change your passwords.” Many more sites than those few are involved. The short list of sites probably breached includes authy, patreon, okcupid, fitbit, medium, 4chan, yelp, zendesk, uber, thepiratebay, pastebin, discordapp, change.org, feedly, nationalreview, petapixel, and tineye. There are many, many more. If you enjoy horror films, check the list of Cloudflare sites on Github.
In the short term, updating your passwords is an excellent idea — actually a good idea every few months even without breaches. If your site uses Cloudflare, it’s time to clear all session data. It is also prudent to require your site members to change their passwords, and absolutely change your admin-level passwords. While the Cloudflare leak probably isn’t the end of the world, security-wise, it does raise one long-term caution about the users on shared servers.
As more and more of the world is virtualized, and more and more of that virtualization is on large, publicly shared servers, perhaps the Cloudflare policy of allowing access without scrutiny is outdated. No one is claiming that Cloudflare was negligent. Perhaps it is just not possible to isolate sites sharing the same space. Breaches may come from human error, failure, or attacks. If recent history has shown anything, it’s that there will be leaks. In this case the leak crossed domains, but was sent out publicly; Ormandy found it in Google’s public search results. Had he not, Cloudflare still may not have discovered the breach. Had the information leaked privately from one domain to another, the fate of the leaking domain and its users would have been in the hands of the owners of the domain receiving the leaks. This incident asks whether it might be time to consider which other clients are using or passing through the same servers where your cloud is parked. When you live in a gated community, the greatest vulnerability is from your neighbors.
As it has since 1976, Frontier Computer can provide IT hardware and enterprise computing solutions. We have experts who can identify the best, safest cloud or physical storage for your data.
Our Dutch counterparts at FrontierBV published this article last week. We are sharing it here with slight modification for a US audience.
Most of the European emergency services communicate and coordinate through the TETRA (Terrestrial Trunked Radio) system. It is also known as the C2000 system, and it is not new. It has an advantage over other technologies, such as the GSM networks that exist throughout Europe (AT&T and T-Mobile cellular in the US), because its low frequency gives it a very long range, even in remote places. The significant drawback is that TETRA is primarily a voice system with very low data rates. On more than a few occasions, the system has failed when it was needed most. Given the threats Europe has faced in the last few years, quick and thorough response from emergency services should not be hampered by outdated communications technology.
The European C2000 emergency services communication system broke down after the 2009 crash of Turkish Airlines Flight 1951 at Schiphol Airport in Amsterdam. Last year, on the morning of March 22, 2016, three coordinated suicide bombings occurred in Belgium: two at Brussels Airport in Zaventem, and one at Maalbeek metro station in central Brussels. Both incidents created an overload of the system, preventing messages from reaching everyone involved. The radio-based European emergency communication system cannot handle the data load of modern disasters.
European governments are currently working to improve the existing C2000 network. However, the authorities are not exploring truly innovative communications ideas. Discussions have centered around voice communications to replace the existing primarily voice system at a time when any new system should enable video and photo sharing for better, immediate situational assessment and appropriate response. This level of communication requires a robust connection to control rooms, command centers and related hotlines. The TETRA network will never have sufficient throughput, and these more data intensive communications should be diverted to other technologies. The current network could continue to be used for voice communications with a faster pipeline employed for data, and overflow communications.
Peplink: a Possible Solution
Peplink produces a line of routers, access points and related software to allow unbreakable communication. Many Peplink solutions are used in mobile and machine-to-machine communication, including law enforcement, and have direct application to emergency services across a range of departments. Current Peplink technologies make it possible to have a secure, fast connection in even the most remote locations. Peplink devices bond together multiple channels and technologies and can seamlessly switch between wireless WAN, 3G / 4G / 5G, and satellite connections. The Software Defined WAN (SD-WAN) always selects the best connection option, and can dynamically change from one type of connection to another without any interruption to the users.
Flexible, Wide Band Connection for Emergency Services
Throughout Europe there are cameras monitoring public streets and spaces. With the bandwidth available through bonded communications, image feeds from those municipal cameras could be instantly transmitted to vehicles or command centers. Police and fire departments could respond more quickly and safely by understanding the situation before they even arrive. In addition, video could be shared directly between vehicles, police helicopters and emergency rooms. Police and fire departments all over the world already use Peplink for communications. Extending that technology to interdepartmental networks makes sense.
The Dutch ambulance service is developing software to connect live video and audio from the scene of an emergency directly to physicians in the hospital, allowing ambulance personnel to respond more quickly and appropriately. In addition, vital information can be communicated from the ambulance to hospital staff so they are ready when the ambulance arrives, for faster, more efficient patient care. This sort of system will require bandwidth that Peplink technology can deliver without interruption.
European emergency service networks would do well to employ Peplink’s unbreakable power and failover connections to avoid communications failures. That same technology is readily available for businesses of any size and even home users. Frontier Computer Corp., together with our partner resellers, can find the right Peplink solution for your application.
We stock the full line of Peplink and Pepwave solutions.
Peplink and Pepwave have brought innovation and reliability to the communications market. The Peplink line has revolutionized SD-WAN with its FusionHub technology and InControl network management software. The Pepwave line has created a continuous, never-fail connection for thousands of mobile installations around the globe. Until now, however, Peplink did not offer a networking switch.
The new Peplink SD Switch has changed that. Keeping with the Peplink tradition, the new switch is more than just another entry in an already crowded field. The Peplink SD Switch stands out for its cloud-based management, its unbreakable power, and its instant network-wide troubleshooting.
The PSW-24-850W Switch has twenty-four 802.3at (PoE+) ports and two 10G SFP+ ports in a standard rack-mountable enclosure. The real difference is what lies inside. Redundant AC inputs allow two independent power sources to be connected, providing uninterrupted power to the unit. The 850W also has three redundant 300-watt power supply units inside. The system itself uses only 50 of those watts, balancing the load among the three PSUs for longer life and leaving an 850-watt power budget for connected devices. Since the load is balanced across all three power supplies, one could fail and the device would continue to function. As is the expectation for all Peplink devices, the SD Switch can keep running even when something unexpected happens.
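The power budget works out neatly from the numbers above. The per-port figure used here (30 W) is the 802.3at PoE+ ceiling from the standard, not a Peplink specification.

```python
# The 850 W power budget, worked out from the figures above.

psu_watts = 300
psu_count = 3
system_draw = 50

total_supply = psu_watts * psu_count            # 900 W available
poe_budget = total_supply - system_draw         # 850 W for devices

max_per_port = 30                               # 802.3at (PoE+) ceiling
ports = 24
worst_case_load = ports * max_per_port          # 720 W if every port maxed

print(poe_budget)                     # 850
print(worst_case_load <= poe_budget)  # True — full PoE+ load fits
```

In other words, even with every port drawing the PoE+ maximum, the switch still has headroom within its 850-watt budget.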
Peplink also includes its robust InControl2 cloud management software. InControl2 allows remote management of all SD Switches and other Peplink devices. Managers can remotely configure VLANs and ports, check and manage power, and troubleshoot problems from anywhere in the world. Each device on the network can be instantly searched and identified to diagnose problem connections and peripherals. Peplink even has an app that allows monitoring, control, and troubleshooting of an entire network from a smartphone.
Frontier Computer Corp. has Peplink PSW-24-850W SD switches in stock and ready to ship. Our Peplink certified product managers can help you add SD Switches to your network. Contact us today to get Peplink’s quality and innovation.
We stock the full line of Peplink and Pepwave solutions.
People in the IT business can chat for hours in an alphabet soup of code words that fly over the heads of even some of the technically connected. It’s easy to tune them out (and doing so often spares you a migraine). Sometimes, however, it is good to know what all those letters mean. We have moved past the days when tech people did their thing and the rest of us just went on with our lives. Planners in any business, even poets, need to know what those tech guys are droning on about, and even ask a few questions occasionally. In the Primer of Basic IT Acronyms, the connection terms LAN, WAN, and SD-WAN are lesson one.
We all use the internet. Every single person in every business is connected. In an office setting each new employee gets a login and email address on the first day. The teen at the fast food counter is connected through the touch screen menu. Think of any industry, enterprise, or organization and it doesn’t take two minutes to find the online connection. Your home washing machine will soon need a login.
All of these people and devices are connected through a NETWORK. Network is the “N” in LAN and WAN. Both terms describe your network: how the devices in it, and the people who use them, are connected.
LAN stands for LOCAL AREA NETWORK. Long ago, about 2005, wires characterized LANs. You had a LAN if every computer on your network was connected with Ethernet cables (or, in the case of AppleTalk, telephone extension wires). In the dark ages, before 1995, there were LANs that did not connect beyond the rooms or building where they were housed. They mostly required special software and hardware. Those of us old enough to have been there still remember wanting to punch the Novell NetWare guy when he claimed, “Everything is working just like it should.” Those early LANs allowed printer sharing, local file sharing, limited messaging, and nothing else.
LANs have changed. Now the Local in LAN really is a designation of proximity and access. Your Wi-Fi network at home is a LAN. Now LANs can also be much, much bigger. The entire campus of a university might be a single LAN. They share a common connection, be it wired, wireless or a combination of both. The limit is really who owns and manages the connection. When all of the connections made inside the network have a single owner and manager, and they share a pipe, you have a LAN.
There is some grey area here, which should be fine with the poets. WAN stands for WIDE AREA NETWORK. A WAN can seem a lot like a LAN. Theoretically, a LAN could be bigger than a WAN. The difference is, a WAN goes outside of the local channels and makes connections through other flowing streams. If a business has offices in three countries, it will need to go through public connection channels to maintain a network between its distant local networks.
To over simplify, a LAN is a building with only one or two well-guarded doors. The visitors are only allowed inside after they are checked at the door. Once inside, they can move around freely and don’t need another ID check. A WAN is a far more public building. There are people from all over moving through it at any one time, but only those with the right name tag can get through certain doors. WANs are inherently less secure, and because of all the ID checking, data moves less efficiently through a WAN. Your ISP (Internet Service Provider; for the poets: Comcast, Time Warner, Charter) is really just a gargantuan WAN connecting you to the Granddaddy of all WANs, the internet.
Since they are so complicated, WANs were usually the property of an outside vendor or required proprietary hardware. A major player might have a WAN comprised of leased private lines spanning continents. If Friendly Bank wanted to connect its 400 branches in three states, it had to pay monthly for dedicated lines through Mega Telecom and/or lease hardware from WANs-r-Us. The management of those WANs also required staff.
An SD-WAN, on the other hand, requires much less third party involvement. SD-WAN stands for SOFTWARE DEFINED-WIDE AREA NETWORK. It is a private, wide connection with the security, routing and connecting done with software using the existing data flow. Even a smaller enterprise with only a few offices can have an SD-WAN to connect its one sales rep in the UK to the main office in Arkansas without having to run a T1 line (a really expensive data pipe) to her house. In contrast to a traditional WAN, its software-defined replacement does not rely on a single provider for a connection or hardware. SD-WAN can be much less expensive, much more flexible, and in some cases, more reliable.
SD-WAN does not need a specific pipe or channel. It can use whatever pathways are available, even switching between them, and deliver a connection to the user that seems hard-wired. Peplink’s SpeedFusion SD-WAN technology will bond together different connections to present a steady flow of data. SpeedFusion will pair a cellular connection, a cable connection, and even a fiber connection, then switch the data from one to the other seamlessly and securely in a way that is completely transparent to the user. The result is a network connection that never goes down.
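The bonding idea can be sketched in a few lines. This is a toy model of the behavior described above, not Peplink’s actual SpeedFusion code: traffic is spread across whatever links are up, weighted by each link’s capacity, so the user sees one steady pipe.

```python
# Toy bonding sketch: distribute packets across live links in
# proportion to capacity; a failed link's share is absorbed by
# the survivors.

def distribute(packets, links):
    """links: {name: {"mbps": capacity, "up": bool}} -> packets per link."""
    live = {n: l for n, l in links.items() if l["up"]}
    total = sum(l["mbps"] for l in live.values())
    return {n: packets * l["mbps"] // total for n, l in live.items()}

links = {
    "cable":    {"mbps": 100, "up": True},
    "cellular": {"mbps": 50,  "up": True},
    "fiber":    {"mbps": 250, "up": True},
}
print(distribute(4000, links))
# {'cable': 1000, 'cellular': 500, 'fiber': 2500}

links["fiber"]["up"] = False     # fiber drops; the others absorb the flow
print(distribute(4000, links))
# {'cable': 2666, 'cellular': 1333}
```

When the fiber link fails, nothing is renegotiated from the user’s point of view; the remaining links simply carry a larger share, which is why the connection appears never to go down.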
SD-WAN is a Wide Area Network for organizations of any size. Peplink routers have SD-WAN technology built in. Because it is software based, data flow, management, connections, and control can be centrally managed from one remote location. Peplink has even created an app that allows a full network to be managed from a smartphone.
You do not have to be an engineer to understand how SD-WAN can give your enterprise a connection advantage. An SD-WAN is relatively simple, flexible, and inexpensive to implement and maintain. With the range of solutions from Peplink, SD-WAN is easily available. Even poets need to be connected.
Frontier Computer is the largest stocking distributor of Peplink SD-WAN solutions. Our engineers can help you create an SD-WAN for your clients. We cannot help you with a tricky sonnet.