I'm A Mainframe Bigot

I make no apologies for my bigotry when I recommend mainframes for the new economy. Dollar for dollar, a properly managed mainframe environment will nearly always be more cost effective for our customers to run. This doesn’t mean there aren’t exceptions, but we aren’t talking about the outliers – we’re looking at the masses of data that support this conclusion.
To level-set this discussion: If you’re not familiar with mainframes, move along.
We aren’t talking about the Matrix “Neo, we have to get to the mainframe” fantasy world here. We’re talking about “Big Iron” – the engine that drives today’s modern economy. It’s the system where most data of record lives, and has lived for years. And this is a philosophical discussion, more than a technical one.
I’d never say there aren’t acceptable use cases for other platforms. Far from it. If you’re running a virtual-desktop solution, you don’t want that back end on the mainframe. If you’re planning to do a ton of analytics, your master data of record should be on the host, and likely there’s a well-thought-out intermediate layer involved for data manipulation, mapping and more. But if you’re doing a whole host (pun intended) of mainstream enterprise computing, IBM’s z systems absolutely rule the day.
I remember when my bank sold off its branch network and operations to another regional bank. It wasn’t too many years ago. As a part of this rather complicated transaction, bank customers received a series of letters informing them of the switch. I did some digging and found out the acquiring bank didn’t have a mainframe.
I called our accountant, and we immediately began a “bake off” among various banks to decide where to move our banking. Among the criteria? Well-integrated systems, clean IT environment, stability (tenure) among bank leadership, favorable business rules and practices, solid online tools, and of course, a mainframe.
So what’s my deal? Why the bigotry? Sure, there are issues with the mainframe.
But I, and by extension TxMQ, have been doing this a long time. Our consultants have collectively seen thousands of customer environments. Give us 100 customers running mainframes, 100 customers who aren’t, and I guarantee there are far more people, and far greater costs required to support similar-size adjusted solutions in non-mainframe shops.
Part of the reason is architecture. Part is longevity. Part is backward-compatibility. Part is security. I don’t want to get too deep into the weeds here, but in terms of hacking, unless you’re talking about a systems programmer with a bad cough, the “hacking” term generally hasn’t applied to a mainframe environment.
Cloud Shmoud
Did you know that virtualization was first done on the mainframe? Decades ago in fact. Multi-tenancy? Been there, done that.
RAS
Reliability, Availability and Serviceability define the mainframe. When downtime isn’t an option, there’s no other choice.
Security
Enough said. Mainframes are just plain more secure than other computer types. The NIST National Vulnerability Database rates mainframes among the most secure platforms when compared with Windows, Unix and Linux, with reported vulnerabilities in the low single digits.
Conclusion
I had a customer discussion that prompted me to write this short piece. Like any article on a technology that’s been around for over half a century, I could go on for pages and chapters. That’s not the point. Companies at times develop attitudes that become so ingrained that no one challenges them or asks if there’s any proof. For years, the mainframe got a bad rap – mostly due to very effective marketing by competitors, but also because those responsible for supporting the host began to age out of the workforce. Kids who came out of school in the ’90s and ’00s weren’t exposed to mainframe-based systems or technologies, so interest waned.
Recently, the need for total computing horsepower has skyrocketed, and we’ve seen a much-appreciated resurgence in the popularity of IBM’s z systems. Mainframes are cool again. Kids are learning about them in university, and hopefully, our back-end data will remain secure as companies realize the true value of Big Iron all over again.

How it Works: KPIs and the future of your business

How do you measure your business goals?
For some companies, success is measured mostly by profits. For others, the key indicator is customer satisfaction. Some companies measure their success by the success of their products. For each of them, however, there is one guarantee – success is never measured by just one criterion.
Not only is it essential to learn what drives success, but smart business executives must also understand why. That means you can’t just collect data – you have to analyze it to learn what the information means to your bottom line. Keeping track of all this nuanced information, however, can become overwhelming. That’s why businesses of all sizes turn to software that collects and analyzes key performance indicators, or KPIs.
KPIs help you monitor and manage the metrics that impact company growth. For example, maybe you’re a retailer looking to boost sales by 10% in Q3. KPIs can be used to help you project how to adapt labor and product costs to achieve this goal. KPIs aren’t just for overall company goals. They bring insight into individuals and departments, too. For example, your social media manager should have a list of KPIs that determine whether or not the company’s campaigns are successful at generating qualified leads. Your helpdesk team would work more productively if they had KPIs that kept track of how quickly and effectively they resolve tickets.
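To make that concrete, here’s a minimal sketch of what a helpdesk KPI calculation might look like in Python. It assumes a hypothetical CSV export of tickets with opened/closed timestamps and a reopened flag – the file name and column names are illustrative, not any particular product’s format:

```python
import csv
from datetime import datetime
from statistics import mean

def hours_to_resolve(ticket, fmt="%Y-%m-%d %H:%M"):
    """Elapsed hours between opening and closing a ticket."""
    opened = datetime.strptime(ticket["opened_at"], fmt)
    closed = datetime.strptime(ticket["closed_at"], fmt)
    return (closed - opened).total_seconds() / 3600

# Hypothetical export columns: id, opened_at, closed_at, reopened ("yes"/"no")
with open("helpdesk_tickets_q3.csv", newline="") as f:
    tickets = list(csv.DictReader(f))

mttr = mean(hours_to_resolve(t) for t in tickets)   # mean time to resolve
first_time_fix = sum(t["reopened"] == "no" for t in tickets) / len(tickets)

print(f"Mean time to resolve: {mttr:.1f} hours")
print(f"First-time fix rate:  {first_time_fix:.0%}")
```

In practice, a business activity monitoring tool does this kind of aggregation for you, continuously and across far more data sources – but the underlying arithmetic is no more mysterious than this.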
Most likely, your business uses KPIs in some form or fashion, but it’s how you utilize the data that makes the real difference. The whole process can even be automated, so your executives don’t need an IT degree to interpret data. Business activity monitoring tools harness big data analytics to bring insight to a broad range of users, from line-of-business to accounting to administrative. This means easy access for the people who need to apply the data toward decisions that impact the whole company.
IBM’s business activity monitoring solution not only provides you with current data, but it also helps you predict future situations by analyzing potential “what-if” scenarios. You can empower your company with analysis-driven strategies, becoming more proactive and less reactive.
The IBM solution isn’t the only one out there, but what sets it apart is its flexibility. It works for companies of all sizes – from large-scale enterprises to SMBs. So whether you’re a small retailer looking to bring in more customers or a Fortune 500 company ready to start a rebranding process, KPI software solutions can help you transform slow and expensive processes into strategies that help you grow.

Upgrade Windows Server 2003 (WS2003) – Do It Today

Another day, another end-of-support announcement for a product: On July 14, 2015, Windows Server 2003 (WS2003) goes out of support.
Poof! Over. That’s the bad news.
What’s the upside? Well, there isn’t really an upside, but you should rest assured that it won’t stop working. Systems won’t crash, and software won’t stop running.
From the standpoint of security, however, the implications are rather more dramatic.
For starters, this automatically means that anyone running WS2003 will be noncompliant with PCI security standards. So if your business falls under these restrictions – and if your business accepts any credit cards, it certainly does – the clock is ticking. Loudly.
There’ll be no more security patches, no more technical support and no more software or content updates after July 14, 2015. Most importantly, this information is public. Hackers typically target systems they know to be out of support. The only solution, really, is to upgrade Windows Server 2003 today.
TxMQ consultants report that a large percentage of our customers’ systems are running on Windows Server, and some percentage of our customers are still on WS2003. There are no terms strong enough to stress the need to get in touch with TxMQ, or your support vendor, for an immediate plan to upgrade Windows Server 2003 and affected systems.
Server migrations often take up to 90 days, while application migrations can take up to two months. Frankly, any business running WS2003 doesn’t have 60 days to spare, let alone 90. So please make a plan today for your migration/upgrade.
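As a first step toward that plan, the triage can be as simple as the sketch below. It assumes a hypothetical CSV export of your server inventory (hostname, OS name, owner) – substitute whatever your CMDB or discovery tooling actually produces:

```python
import csv

# Hypothetical inventory export columns: hostname, os_name, owner
with open("server_inventory.csv", newline="") as f:
    servers = list(csv.DictReader(f))

# Flag anything still reporting Windows Server 2003 for the upgrade plan
at_risk = [s for s in servers if "windows server 2003" in s["os_name"].lower()]

print(f"{len(at_risk)} of {len(servers)} servers are still on WS2003")
for s in at_risk:
    print(f"  {s['hostname']:<20} owner: {s['owner']}")
```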

IRS Get Transcript Breach – The Agency Didn't Adequately Prepare

The announcement came yesterday: Chinese hackers had breached the federal government’s personnel office. In isolation, this might seem a single event. But when viewed in the grouping of several other top-level hacks, it becomes clear that the federal government is extremely vulnerable.
One clear parallel was the recent IRS Get Transcript breach, announced in late May, which is believed to trace back to Russia. The information was taken from an IRS website called Get Transcript, where taxpayers can obtain previous tax returns and other tax filings. To access the information, the thieves cleared a security screen that required detailed knowledge about each taxpayer, including Social Security number, date of birth, tax-filing status and street address. The IRS believes the criminals originally obtained this information from other sources. They were accessing the IRS website to gather even more information about the taxpayers, which would help them claim fraudulent tax refunds in the future. Might the information in the more recent hack also provide the fuel for a future hack? Quite likely, in my opinion.
What’s especially bothersome to me is that the IRS had received several warnings from the GAO in 2014 and 2015. Had those recommendations been implemented, there would have been far less opportunity for the attack. The IRS failed to implement dozens of security upgrades to its computer systems, some of which could have made it more difficult for hackers to use an IRS website to steal tax information from 104,000 taxpayers.
In addition, the IRS has a comprehensive framework for its cybersecurity program, which includes risk assessment for its systems, security-plan development, and the training of employees in security awareness and other specialized topics. However, the IRS did not correctly implement aspects of its own program. The IRS faces a higher-than-average statistical probability of attack, yet it was unprepared. Let’s face it: The US federal government is a prime target for hackers.
The concern here, of course, is the grouping of attacks and the reality that the US government must be better prepared. I’ve managed IT systems and architecture for more than three decades, and I’ll say this: The IRS testing methodology wasn’t capable of determining whether the required controls were operating effectively. This speaks not only to technical unpreparedness, but to a generally passive attitude toward these types of events and the testing protocols around them. The federal government doesn’t adequately protect the PII it collects on all US citizens, and simply sending a letter to those impacted by a breach is not enough to prevent a recurrence.
I don’t need to tell you that. The GAO told the IRS the same thing: “Until IRS takes additional steps to (1) address unresolved and newly identified control deficiencies and (2) effectively implements elements of its information security program, including, among other things, updating policies, test and evaluation procedures, and remedial action procedures, its financial and taxpayer data will remain unnecessarily vulnerable to inappropriate and undetected use, modification, or disclosure.”
These shortcomings were the basis for GAO’s determination that IRS had a significant deficiency in internal control over financial-reporting systems prior to the IRS Get Transcript Breach.
Author Note: In my next blog on security, I’ll talk about the NIST standard for small businesses, with recommendations to prepare and protect in the wake of these high-level breaches.
(Photo by Ray Tsang)

Do health-tracking wearables actually make us healthier?

When I’m not writing all about the health IT world, I am a personal trainer, and it never ceases to amaze me how often these two worlds collide. The other day one of my training clients said to me – “I’ve gained almost 10 pounds since I got my <insert name of popular health-tracking device here>. Isn’t it supposed to do the opposite?”

 I thought about it for a minute.

 “Well,” I began, “Do you wear it every day? Have you forgotten it any?”

 The client shook her head. “I only take it off to shower and to charge it.”

 I thought a little more.

 “How have your behaviors changed since you started wearing it?”

Now she looked at me strangely. She shrugged. I asked her to keep wearing her health-tracking fitness band, but also to go back to keeping a journal where she logs her activities and her food. In addition, I asked her to log how often she consults her wearable.

When she came back to me the next week, she handed over the journal. It didn’t take long to see what the problem was. In the evenings, when my client consulted her health-tracking device (which she does about a billion times a day), she would then consume the exact number of calories she had remaining in order to come in right at her daily goal. Sometimes, though, those snacks consisted of highly processed carbohydrates and sugars. In addition, her fitness band had no way of knowing her muscle mass or the speed at which she metabolizes specific types of food.

Technology plays an enormous and essential role in the detection, diagnosis and treatment of many life-threatening diseases. Digital devices monitor heartbeats and blood pressure, all able to be analyzed through the amazing connectivity of the Internet of Things. We can cure and prevent more illnesses than ever imagined before with digestible sensors, hybrid operating rooms and 3D-printed biological materials. However, as a fitness professional, I’m not talking about that kind of technology. I’m talking about the kinds of health-tracking gadgets – wristbands, apps and trackers – that have become as commonplace as the timeless Timex. Can these wearables really stop, or even reverse, the American obesity epidemic?

The answer is — it depends. In my client’s case, no. Or, not exactly. She was using the fitness band to justify eating poorer-quality foods more often. For some people, however, these devices work amazingly well. I regularly meet marathon runners who worship the Garmin watches that help them track speed, as well as fitness-band enthusiasts who saw the fat melt away from the moment they plugged in. The crux is this – in order to live a healthier lifestyle, you have to change human behavior. While these health-tracking devices cannot force behavior change, they can make us more aware of our actions and choices.

Interested in using health & fitness tech to kickstart or continue your healthy lifestyle? Check out CNet’s review of top wearables under $200:

Rigorous Enough! MQTT For The Internet Of Things Backbone

The topic of mobile devices and mobile solutions is a hot one in the IT industry. I’ll devote a series of articles to exploring and explaining this very interesting topic. This first piece focuses on MQTT for the Internet of Things – a telemetry protocol originally developed at IBM.
MQTT provides communication in the Internet of Things – specifically, between the sensors and actuators. What makes MQTT unique is that, unlike several other communication standards, it’s rigorous enough to cope with low bandwidth and poor connectivity and still provide a well-behaved message-delivery system.
Within the Internet of Things there’s a universe of devices that need to communicate with one another. That inter-communication is what enables “smart devices,” which connect to other devices or networks via different wireless protocols and can operate to some extent both interactively and autonomously. It’s widely believed that these types of devices will, in very short time, outnumber all other forms of smart computing and communication, acting as useful enablers for the Internet of Things.
MQTT architecture is publish/subscribe, and it’s designed to be open and easy to implement, with thousands of remote clients capable of being supported by a single server. From a networking standpoint, MQTT operates over TCP. TCP (unlike UDP) provides stability to message delivery because it’s connection-oriented. Unlike the typical HTTP header, the MQTT fixed header can be as little as 2 bytes, and those 2 bytes carry all of the information required to maintain a meaningful communication: the type of message, the quality-of-service level and flags such as whether the message should be retained. An optional variable header and payload of any length can follow.
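To make that concrete, here’s a minimal publisher sketch using the open-source Eclipse Paho Python client (the 1.x API); the broker hostname, client ID and topic are placeholders, not a real deployment:

```python
import paho.mqtt.publish as publish

# Placeholder broker, client ID and topic -- not a real deployment
publish.single(
    "plant/line1/temperature",   # topic
    payload="21.7",
    qos=1,                       # "at least once" delivery
    retain=True,                 # broker keeps the last reading for late subscribers
    hostname="broker.example.com",
    port=1883,
    client_id="sensor-42",
)
```

Notice how little the application has to supply: a topic, a payload, a QoS level and a retain flag. Everything else rides in that tiny fixed header.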
The quality-of-service parameters control the delivery of the message to the repository or server. The options are:

  • QoS 0 – At most once
  • QoS 1 – At least once
  • QoS 2 – Exactly once

These quality-of-service options control the delivery to the destination. The first 4 bits of the first header byte identify the type of message (CONNECT, PUBLISH, SUBSCRIBE and so on). For a published message, the topic – carried in the variable header – determines which subscribers receive it. The other element worth calling out is the retain flag, which, a bit like persistence in MQ, tells the broker to hold on to the last message published on a topic and hand it to any subscriber that connects later. A separate “clean session” flag, set when a client connects, tells the broker whether to keep that client’s subscriptions and undelivered messages between sessions.
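And here’s the matching subscriber sketch, again with the Paho 1.x client and placeholder names. Because the publisher above set the retain flag, a subscriber that connects later still receives the last reading on each matching topic:

```python
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Wildcard subscription: everything under plant/line1/
    client.subscribe("plant/line1/#", qos=1)

def on_message(client, userdata, msg):
    kind = "retained" if msg.retain else "live"
    print(f"{msg.topic}: {msg.payload.decode()} ({kind})")

client = mqtt.Client(client_id="monitor-1")   # placeholder client ID
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)    # placeholder broker
client.loop_forever()                         # dispatch callbacks until interrupted
```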
In my next blog I’ll discuss the broker or repository for these messages. There are several repositories that can be used, including MessageSight and Mosquitto among others. The beauty of these repositories is their stability.
(Photo by University of Liverpool Faculty of Health & Life)

MQ In The Cloud: How (Im)Mature Is It?

Everyone seems to have this concept that deploying all of your stuff into the cloud is really easy – you just go up there, spin up a VM, load your data and you’re done. And when I say “everyone” I’m referring to CIOs, software salespeople, my customers and anyone else with a stake in enterprise architecture.
When I hear that, immediately I jump in and ask: Where’s the integration space in the cloud today? Remember that 18 or 20 years ago we were putting up application stacks in datacenters that were 2- or 3-tier stacks. They were quite simple stacks to put up, but there was very little or no integration going on. The purpose was singular: Deal with this application. If we’d had a bit more foresight, we’d have done things differently. And I’m seeing the same mistake right now in this rush to the cloud.
Really, what a lot of people are putting up in the cloud right now is nothing more than a vertical application stack with tons of horsepower they couldn’t otherwise afford. And guess what? That stack still can’t move sideways.
Remember: We’ve been working on datacenter integration since the last millennium. And our experience shows that the problems of that last millennium haven’t been solved by the cloud. In fact, the new website, the new help desk, the new business process and solutions like IBM MQ that grew up to solve these issues have all matured within the datacenter. But the cloud’s still immature, because there’s no native, proven and secure integration. What we’re doing in the cloud today is really the same thing we did 20 years ago in the datacenter.
I’m sounding the alarm, and I’m emphasizing the need for MQ, because in order to do meaningful and complicated things in the cloud, we need to address how we’re going to do secure, reliable, high-demand integration of systems across datacenters and the cloud. Is MQ a pivotal component of your cloud strategy? It’d better be, or we’ll have missed the learning opportunity of the last two decades.
How mature is the cloud? From an integration standpoint, it’s 18 to 20 years behind your own datacenter. So when you hear the now-familiar chant, “We’ve got to go to the cloud,” first ask why, then ask how, and finish with what then? Remind everyone that cloud service is generally a single stack, that significant effort and money will need to be spent on new integration solutions, and that your data is no more secure in the cloud than it is in a physical datacenter.
Want to talk more about MQ in the cloud? Send me an email and let’s get the conversation started.
(Photo by George Thomas and TxMQ)

What The Premera Breach Teaches Us About Enterprise Security

By TxMQ Middleware Architect Gary Dischner
No surprise to hear of yet another breach occurring – this time at Premera Blue Cross. The company became aware of a security breach on Jan. 29, 2015, but didn’t begin to notify anyone involved (including the state insurance board) until March 17 – nearly seven weeks later. The actual attack took place in May 2014 and may affect 11 million customer records dating back to 2002.
As with many companies that experience a security breach, the excessive delays in first identifying and confirming that a breach had occurred, coupled with the typical delays in assessing it and providing notification, led the state insurance board to fault Premera for untimely notification. A review of the HIPAA breach-reporting regulations indicates that notification of those impacted absolutely needs to occur within 60 days. Many companies, including Premera, just aren’t equipped with the tools and security-management processes to handle these incidents. For healthcare companies, HIPAA guidelines state that notification to the state insurance commissioner should be immediate for breaches involving more than 500 individuals. Consequently, Premera is now being sued by the state insurance commissioner.
A company found guilty of late notification should concern the public: There’s at least the appearance of a general lack of concern over both the impact and the severity to its customers, partners and constituents. Premera Blue Cross has responded with efforts to protect itself and to withhold details of the incident, rather than being forthright with information so that those impacted can take the needed steps to protect themselves from further exposure and potential consequences, such as fraud and identity theft.
A secondary concern is the lack of security-management measures around protected data at many companies. In this case, the audit – whose recommendations had been provided to Premera on Nov. 28, 2014 – found serious infractions in each of the following domains:

  • Security management
  • Access controls
  • Configuration management
  • Segregation of duties
  • Contingency planning
  • Application controls specific to Premera’s claims-processing systems
  • HIPAA compliance

More and more companies are being reminded of these data exposures and the related risks, but remain slow to respond with corrective measures. Companies of high integrity will take immediate responsive measures and will openly express concern for the repercussions of the exposure. Companies that don’t? They should be dealt with severely. Let this Premera example serve as a warning – just as the Anthem breach did – for any company holding sensitive data. As a customer or business partner, let them know you expect them to take every measure to protect your healthcare and financial information.
And in closing, let’s all take away a few lessons learned. Security assessments must become a regular operational function. Self-audits demonstrate a company’s high integrity and commitment to identifying process improvements for security management. Such efforts should be assessed quarterly with reports to the company board to make sure every vulnerability is remediated and customers who are working with the company are protected. After all, it’s only the company that can secure its own technical environments.
Photo by torbakhopper

The Need For MQ Networks: A New Understanding

If I surveyed each of you to find out the number and variety of technical platforms you have running at your company, I’d likely find that more than 75% of companies haven’t standardized on a single operating system – let alone a single technical platform. Vanilla’s such a rarity – largely due to the many needs of our 21st-century businesses – and with the growing popularity of the cloud (which makes knowing your supporting infrastructure even more difficult) companies today must decide on a communications standard between their technical platforms.
Why MQ networks? Simple. MQ gives you the ability to treat each of your data-sharing members as a black box. MQ gives you simple application decoupling by limiting the exchange of information between application endpoints to application messages. These application messages have a basic structure of “whatever,” plus an MQ header that carries the routing destination and messaging-pattern information. The MQ message becomes the basis for an inter-communication protocol that an application can use no matter where the application currently runs – even when the application gets moved in the future.
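As a rough illustration of that decoupling, here’s a sketch using the pymqi client; the queue manager, channel, host and queue names are assumptions for the example, not a reference configuration:

```python
import pymqi

# Assumed names for the example only -- substitute your own queue manager,
# channel, host and queue
conn_info = "mq.example.com(1414)"
qmgr = pymqi.connect("QM1", "DEV.APP.SVRCONN", conn_info)

# Producer side: put a self-describing message and move on.
# It neither knows nor cares what will eventually consume it.
out_q = pymqi.Queue(qmgr, "APP.ORDERS.IN")
out_q.put(b'{"orderId": 1234, "qty": 2}')
out_q.close()

# Consumer side: in real life this would be a different application,
# possibly on a different platform or in a different datacenter.
in_q = pymqi.Queue(qmgr, "APP.ORDERS.IN")
print(in_q.get().decode())
in_q.close()

qmgr.disconnect()
```

The producer and the consumer share nothing but the queue name and the message format – either side can be rewritten, replaced or relocated without the other ever noticing.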
This standard hands your enterprise the freedom to manage applications completely independently of one another. You can retire applications, bring up a new application, switch from one application to another or route in parallel. You can watch the volume and performance of applications in real time, based on the enqueuing behavior of each instance, to determine whether it’s able to keep up with the upstream processes. No more guesswork! No more lost transactions! And it’s easy to immediately detect an application outage, complete with the history and how many messages didn’t get processed. This is the foundation for establishing Service Level Management.
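That monitoring story is just as simple at the API level. Here’s a sketch – again pymqi, again assumed object names – that reads the current depth of a queue to see whether a consumer is keeping up:

```python
import pymqi
from pymqi import CMQC

# Same assumed names as the previous sketch
qmgr = pymqi.connect("QM1", "DEV.APP.SVRCONN", "mq.example.com(1414)")

# Open the queue for inquiry only and read its current depth
queue = pymqi.Queue(qmgr, "APP.ORDERS.IN", CMQC.MQOO_INQUIRE)
depth = queue.inquire(CMQC.MQIA_CURRENT_Q_DEPTH)
queue.close()
qmgr.disconnect()

# A depth that keeps climbing tells you the consumer isn't keeping up
print(f"APP.ORDERS.IN current depth: {depth}")
```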
The power of MQ networks gives you complete control over your critical business data. You can limit what goes where. You can secure it. You can turn it off. You can turn it on. It’s like the difference between in-home plumbing and a hike to the nearest water source. It’s that revolutionary for the future of application management.

The Difference Between Software Asset Management & Software Asset Managed Services

Do you need SAM or SAMS? The distinction’s important.
SAM is Software Asset Management – a big-brush, cost-control effort that typically describes internal efforts to optimize the software investment. Every business needs SAM.
SAMS is Software Asset Managed Services – a more intensive effort that involves the hiring of outside consultants to document, manage and optimize the software investment. SAMS delivers an impartial eye to the enterprise software stack and relieves licensing and regulatory burdens from internal teams. At the same time, SAMS is built to scale so the consultants can smoothly assume a growing management responsibility for the environment. Internal teams are further unburdened from the day-to-day environment problems and are free to focus on business development, new applications, creative solutions and continuous improvement.
Is SAMS right for your business? If you struggle to implement SAM, or if the cost of maintaining your environment is spiraling out of control, then you probably need SAMS.
Assess Your Assets
You need to know what’s running on your stuff. It’s just that simple. But the quest to find the answers can be surprisingly difficult. Some software’s inactive but still deployed. Employees may have downloaded unlicensed copies to fight a vital fire. Auto-renewal payments for licenses may be festering on a former accountant’s laptop.
The effort to control and optimize your software investment starts with wanting to know what you don’t know. TxMQ utilizes the following workflow in its Software Asset Management (SAM) business line. It’s also the initial stage of its Software Asset Managed Services (SAMS) program.
Step 1: Enterprise Discovery
Your business needs to know exactly how many servers are in use. Are the servers spread out geographically? Internationally? What software is deployed? Is it active or legacy? Is it single-sourced? Mixed? Open-source? How sophisticated is security? Is there a single sign-on for the software? What’s the server-level security? How do employees access the servers? What’s the nature of the business?
Step 2: Document The Software
This step determines exactly what software is installed on the servers: the versions, the levels, how many instances and how many seats are in use. (A sketch of this roll-up appears after the steps below.)
Step 3: Determine The SAM Products To Install
Is the goal to manage the assets, or simply audit them? The choice of SAM products and tools is important.
Step 4: Determine Audit Level
Does the enterprise want to perform a simple audit, or a complete audit that includes update histories, patching and software-lifecycle analysis?
Step 5: Develop Reporting Process
Are there limitations to the audit? Who prepares the report? Who sees it?
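Coming back to Step 2 for a moment: most of that documentation is handled by discovery and inventory tooling, but the shape of its output is simple. Here’s a minimal sketch that rolls a hypothetical per-host software export up into instance counts per product and version (the file and column names are illustrative only):

```python
import csv
from collections import Counter

# Hypothetical discovery-agent export columns: hostname, product, version, edition
with open("installed_software.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Roll individual installs up into counts per product/version --
# the raw input for an effective license position
instances = Counter((r["product"], r["version"]) for r in rows)

for (product, version), count in sorted(instances.items()):
    print(f"{product} {version}: {count} installed instance(s)")
```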
TxMQ specializes in middleware management and application integration. Licensing is a huge part of our business. We can help. Call or email now for a free, confidential consultation. Reach us before the auditors reach you!
Photo by Daniel Iversen