The Legacy of Legacy Applications and Cloud Enablement

The Legacy of Legacy Applications

When the tombstone is one day written for legacy applications (metaphorically, mind you; this will likely never happen), it may read something like: “We knew ye well, yet hardly knew ye!”

There is a ton of noise in the marketplace lately about legacy applications, especially as it relates to moving to the cloud. Rebuilding them, replatforming them, cloud-enabling them, outsourcing their management: you name it, it’s being blogged about, discussed at conferences, and white-papered to death. To be sure, you are reading such a document right now, aren’t you?

I have only a few things to add to the discourse, so I’ll approach this piece as a bit of an aggregation of my readings, mixed with musings gleaned from my direct experiences over 30 years in the field.

The industry seems to have silently adopted the term “legacy applications” to reference all the old, messy stuff companies run. It includes old ERP systems, mainframe applications, old websites and ecommerce applications, point solutions, third-party systems, and even home-grown systems written by long-retired developers, often with little documentation.

These systems keep working (in most cases), may have indeterminate application (and/or endpoint) interdependencies, and in general, the thought of unplugging them scares the daylights out of most IT managers. Simply put, in many cases, they don’t know exactly what these applications do. Thus the fear is that turning off even an apparently isolated, rarely run, and little-understood application might have a disastrous cascading impact on other systems.

The fact is, much of what we lump into the legacy software bucket is software that keeps the lights on for companies. These are core systems that drive significant revenue and often have high-availability requirements that make a cloud discussion problematic, given the latency inherent in such a move.

Among the many challenges of this software is the time and resource commitment companies make to manage it. When too much energy is devoted to legacy applications, too little remains for growth, strategic initiatives, and, of course, evaluating alternate ways of deploying new workloads.

So what to do, and where to start?

An important initial step to take in evaluating cloud options (by cloud, let us presume we are entertaining public, private and hybrid as options for now), is portfolio (application) management.

This means being aware of all the software you own (or run, in the case of some mainframe subsystem software like DB2 and CICS, where ownership remains with IBM while customers pay a monthly license charge). This is often a harder mountain to climb than most companies want to admit, and it is often the point at which companies engage with providers like TxMQ to begin an assessment, sometimes including building a complete catalog of the applications in their portfolio. From here, a discussion ensues around the business value of those applications. This step is critical, as it is imperative to match resources to applications based on their business criticality. We often see situations where companies devote as much time, energy, and thus precious dollars to non-critical but poorly performing applications as to mission-critical, revenue-generating software.

Once a complete view exists of software (and of course hardware) assets, the conversation can move to assigning priority to these assets. Prioritization includes looking at the lifecycle of applications, as well as the infrastructure required to support them. Knowing an application’s payback (or ROI), as well as its life expectancy, is nearly as important as knowing its function.

Problems often arise during staff turnover, when critical legacy application knowledge departs as employees come and go. This can (and should) lead to an important asset management conversation as well.

Once we have a handle on the relative priority of applications, we can start the conversation around where and how to run them, with cost control as the end goal. By properly matching resources to needs, we can free up resources (people and dollars) to once again focus on innovating, something IT has been hard-pressed to do while consumed by legacy application maintenance and firefighting.

The goal, remember, is to be able to deploy software continuously, rapidly, and at high quality. IT can, and should, be part of the strategy conversation, not viewed as a necessary evil that keeps the corporate lights on.

In truth, most companies get caught in this “I don’t have time to innovate, we are too busy putting out fires” mentality. It is a never-ending cycle that has permeated IT for years. Portfolio realignment as a path to cloud enablement can help end this fatal loop.

There are several established methods TxMQ (and others) use to help customers evaluate and prioritize their legacy applications. The goal, in the end, is to identify areas for quick wins. Inevitably, some legacy applications are simply monsters that will require massive rework or rethinking, and companies have a tendency to get lost in that sauce. Let’s not open that can of worms first.

Let’s learn to start with the easy wins. Many Java and .NET legacy applications are excellent early candidates for cloud enablement, with little code tweaking required.

It is important, too, that any corporate sacred cows be identified early on for ‘special handling’. Many an IT project, and more than a few IT leaders, have met an early demise by carelessly poking around the data center. Political minefields exist in all companies and should be delicately identified and avoided. Sometimes it’s not a good idea to ‘whale hunt’; better progress can be made with smaller wins that gain traction, demonstrate successes to leadership, and ultimately earn buy-in to later tackle the major legacy application monsters.

Once completed, or at least well underway, most projects show a common split of applications by category. We usually see a significant share of large legacy applications, up to 85% in some cases. Next, we see 10% or more that appear to be cloud-ready with some retrofitting. Lastly, the remainder are in-flight or to-be-developed applications that can be stood up as cloud-ready from the start.

Perhaps not surprisingly, much of the heavy lifting required to become a ‘cloud ready’ organization involves organizational mindset. Cloud-based applications, ideally, are developed very differently than legacy applications, so IT groups need to rethink application development.

Legacy application development involved (or involves, as much of it continues today) moving work through gates and, oftentimes, a change review board. Some of this process may be required for regulatory reasons. Some may just be in place ‘because that’s how we have always done things’.

We must tackle the process just as we tackle the applications themselves. If we don’t adopt a new mentality, a new paradigm, nothing will change. Yet change is painful and time-consuming. By showing quick early wins both in moving applications to the cloud and in rapidly iterating new applications, organizations will recognize the burden of old, onerous processes and the advantages of rapid application development. Change review boards can and do learn. If they can be shown that making rapid, small, incremental changes to code can actually reduce risk and exposure while increasing code quality, they will adopt the new paradigm. This can also lead to a fruitful recognition of the value of automating the build process, which will reap further rewards.

My goal here was to start the conversation around cloud, and replatforming in particular, while remaining platform-, technology-, and vendor-agnostic. All major vendors have valid cloud options, some well-tailored to specific needs, and many are viable regardless of the legacy application language. Stating a desire to become cloud-ready is the beginning of a journey of discovery. It will lead (in time) to IT once again being invited to the table for broad strategy discussions, having proven its worth by navigating this industry inflection point. We encourage you to begin this journey with your eyes and mind open.


You Are The Network

First there was the Stone Age. Then we learned how to manipulate and smelt metals and to develop hunting tools and farming implements, which led to the Agrarian Age. From there, machines helped bring about the Industrial Age, then the Space Age. So where are we now?
Shall we call it the Network Age?
Robert Metcalfe co-invented Ethernet, the idea of a packet-based, distributed network, and formulated Metcalfe’s law, which states (paraphrased): “the value of a network is proportional to the square of the number of connected users.” Or stated more simply, the utility of a connected ‘thing’ increases as more and more ‘things’ are connected. The telephone is an easy example: one phone by itself is a paperweight, but one million connected to the same network are immeasurably powerful.
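To make that intuition concrete, here is a minimal sketch in Python of how the number of possible connections, which is roughly what Metcalfe’s law tracks, grows with the number of phones on the network (the figures are illustrative only):

    # Metcalfe's law, roughly: a network's value tracks the number of
    # possible pairwise connections, n * (n - 1) / 2, i.e. on the order of n^2.

    def possible_connections(n: int) -> int:
        """Number of distinct pairs among n connected phones."""
        return n * (n - 1) // 2

    for n in (1, 2, 10, 1_000, 1_000_000):
        print(f"{n:>9,} phones -> {possible_connections(n):>15,} possible connections")

    # One phone yields zero connections (a paperweight); a million phones
    # yield roughly 500 billion possible pairings.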
We live in a day of increasing connectivity, and of increasing expectations as a result. Think back to just a few years before the ubiquity of the internet. How did we look up information? A phone book for a phone number? An encyclopedia for random information? How did we find more targeted information? How well did a movie perform on its opening weekend? Who won the Academy Award for best actress in 1980? What was the high temperature in Nagasaki yesterday? These questions required real research. Today, they require only a smartphone, tablet or any web-connected device. Even a smart watch will provide you with answers in milliseconds. We expect this, therefore we ARE changed by this new reality.
In short, the very nature of a thing changes with connectivity. This applies to us as well. As people, we are changed as a result of our ready access to a near limitless supply of information, facts, anecdotes and even dumb cat memes on social media. It’s not just the availability of this information, it’s how we live our lives knowing we have this access.
I remember years ago, likely in the late 80’s, working with a customer and arguing for the utility of networking their office PCs. At the time, each person had their own printer and files were moved from desk to desk via floppy. I failed in making my case and, I’ll note, they were out of business a few short years later. I am not implying causality, but a company today that doesn’t see the change in the world around them will likely see a similar fate in their future.
No one of us can pretend to live, work, or play in isolation any longer. Whether we are a fisherman in southern China or an executive on Wall Street, we are connected to each other by virtue of our connections to technology.
By extension, the isolationist aims of some politicians simply fail to recognize our world today. Some foreign governments have tried to shut down internet sites or censor access entirely. Some succeed, many fail, and still others have been overthrown. Remember the Arab Spring? No, the Egyptian situation wasn’t the fault of internet censorship, but the power of a networked community gave voice to a populace in a way not previously seen. In addition, this connectivity has brought about profound change in many third-world countries. To stop jobs from moving overseas, one would have to shut down the Internet to prevent companies from using overseas labor.
As a result of this increased connection, this ease with which we can communicate across vast distances instantaneously, we have seen a gradual leveling of the playing field. In many of these lands, education levels are improving, as are employment opportunities and standards of living. With a good network connection, one can live anywhere and work for any organization that doesn’t require onsite employees. This connectivity has fundamentally shifted the balance of power in the world by increasing opportunities for everyone.
We can no more become isolationist than we can revert to a Stone Age society. Not one of us would stand for it. Life and history only move in one direction.
Speed
To fully grasp our connectivity, we must look at the speed of life today. How long did it take someone to research an article, paper, or book 50 years ago? How long did it take to travel to Europe 100 years ago and at what cost? How much did a long distance call cost only a generation ago? Today, anyone can make a video call with someone anywhere on the globe instantly, and at an effective cost of zero. How has this changed us?
Speed has a profound effect on our lives, from expectations of traffic when traveling or commuting, to the responsiveness of websites when shopping, to knowing our bank balance in real time, at any time of day. Speed is an extension of convenience. Would you rather go to a mall to purchase a book today, or order one online from Amazon? Better yet, log on with your Kindle and have it instantly. Retail bookstores, and ultimately retailers in general, began to close their doors within a few years of each expansion of Amazon’s offerings.
Similarly, in years past, we chose to live based on schools for our children and where we worked. Today, a growing number of people work remotely, while still more take advantage of online schooling. Speed, and our expectations of it, has changed everything.
At the same time, we always demand more speed. We are hardwired to be more efficient, even to be lazy: to do more, in less time, with less effort. Build a bigger, wider highway and more traffic will find it. Increase the speed of a network and more people will use it. Unsure about this? Try driving through traffic in LA or Toronto.
Power
These changes have negative consequences as well as positive ones. Napoleon realized the value of the third dimension when massing his armies, deploying artillery and exploiting elevation in ways his rivals had not yet seen. A century later, the United States and its allies leveraged air power to help carry the day in World War I. In the 20th century, nations battled nations, and the side with superior resources, numbers and power ultimately prevailed by recognizing a strategic advantage before its enemies did.
Today, the nature of power has shifted, and world leaders face an ugly future if they fail to see it. Business leaders, too, must recognize this paradigm shift in the world and, by extension, in their customers, partners, and even employees. Amazon killed far larger retailers with its business model. Apple’s iTunes fundamentally changed the music industry, while Netflix forced the shuttering of Blockbuster stores. The common theme is seeing a different future brought about by connection, a world simply not possible before a widespread, universally accessible network. The power is in the network, and those who understand and leverage this will rule the future.
Sadly, the ugly side of this is that many terrorist groups have figured out how to leverage networks and connectivity before our traditional world leaders have. Just as military strength ruled the day in the previous century, network power, and an understanding of it, will rule tomorrow’s world.
Value
At the same time, we must also recognize we are the sum of our connections. As the network grows, so grows our value. The tide is rising and so too are all the ships. Each additional point or connection to us, no matter how remote or small, increases the overall value of us and our collective networks.
Conclusion
As we look to the future, we must understand this is more than another ‘paradigm shift’ in our society. This amounts to a redistribution of power in ways heretofore unknown in human experience.
In days gone by, power was concentrated among the few. Military leaders, the clergy, and wealthy merchants controlled most of the world’s power up until the 17th century. The Industrial Age saw power slowly redistributed as capitalism took hold, leading to the rise of the middle class.
Today, however, networks concentrate power among those who control them while also distributing it to users. More power has been placed in the hands of the ‘everyman’ than ever before.
Networks are also made up of many complicated pieces. Routers, servers and switches form the technical backbone. These are complicated items, but predictable and understandable. Together, they make up a complex system. Complicated things are predictable; complex things are random and unpredictable. A car is complicated, but predictable (at least to some extent). Traffic is complex. Both are built from complicated pieces, but complex systems behave unpredictably, like the weather, ocean currents, or storms.
In addition, complex systems lead to the creation of things previously unfathomable. Think about LinkedIn, Facebook or Snapchat as contemporary examples. Without their networks of users, they are single web pages. They are nothing. Similarly, Uber, Airbnb, and others are new systems that were previously unimaginable. These businesses have arisen with remarkable speed and created immense riches for their founders.
Sadly, terrorist groups have also formed upon these network backbones; ISIS emerged from this complexity. This process of creating the unimaginable is only accelerating.
Networks contain enormous power at their core. To control such a system is arguably to control anyone connected to it, or at least to dramatically influence those users or connections. When we do a Google search, do we trust the results implicitly or second guess them?
Today’s networks are increasingly led by a young, technically savvy group of Technorati with limited experience of world history, politics or philosophy. Yet our world is governed by leaders with no experience of these new networks. We cannot go back; we can only look forward.
To quote Joshua Cooper Ramo, author of ‘The Seventh Sense’, who does a wonderful job of summarizing this new world order: “One thing is clear. If we are going to play a role in shaping our world, we don’t have much time.”
And remember, you are the network.

Oracle Announces End of Support For JD Edwards EnterpriseONE Technology Foundation

Long title. What’s this about?
In short, back in 2010, Oracle announced the withdrawal of JDE EnterpriseOne Technology Foundation. The final nail in this coffin comes on September 30, 2016, when technical support officially ends.
What this means is that for many customers (and I’ll get into particulars shortly) there’s a requirement to make a critical decision to either move to Oracle’s Red Stack, or procure new IBM licenses in order to remain on IBM’s Blue Stack.
There’s a variety of customers running the stack, and nearly as wide a variety of options for how companies may have deployed their JDE solution. WebSphere with DB2 is the original and most common. WebSphere with Oracle as the backend is another common combo. And there’s a variety of other blends of supported web/application servers, database servers and related middleware.
Regardless of the configuration, in most cases, these products were part of the bundled solution that customers licensed from Oracle, and now a decision point’s been reached.
This doesn’t mean Oracle’s dropping support for IBM products. This does mean there’s a change in the way they’re licensed.
So what is “Technology Foundation”?
To quote from Oracle’s documents verbatim: Technology Foundation is an Oracle product that provides license for the following components:

  • JD Edwards EnterpriseOne Core Tools and Infrastructure, the native design time and runtime tools required to run JD Edwards EnterpriseOne application modules
  • IBM DB2 for Linux, Unix, and Windows, limited to use with JD Edwards EnterpriseOne programs
  • IBM WebSphere Application Server, limited to use with JD Edwards EnterpriseOne programs
  • IBM WebSphere Portal, as contained in JD Edwards EnterpriseOne Collaborative Portal

Technology Foundation is also referred to by the nickname “Blue Stack.”
If your license for JD Edwards EnterpriseOne applications includes an item called “Technology Foundation” or “Technology Foundation Upgrade,” this affects you.
If your license instead lists other terms, such as “Oracle Technology Foundation,” then this change does NOT affect you. This is also different from the foundation for JD Edwards World.
So what now? In short, if you have Blue Stack, you should contact TxMQ or IBM immediately to acquire your own licensed products and continue running your Oracle solution. TxMQ can offer aggressive discounts to Oracle customers, subject to certain terms and conditions. Contact us for pricing details, as well as any needed migration planning and implementation support.
Contact chuck@txmq.com for immediate price quotes and migration planning today.
Image from Håkan Dahlström.

How Do I Support My Back-Level IBM Software (And Not Blow The Budget)?

So you’re running outdated, obsolete, out-of-support versions of some of your core systems? WebSphere MQ maybe? Or WebSphere Process Server, or DataPower? The list is endless.
Staff turnover may be your pain point – a lack of in-house skills – or maybe it’s a lack of budget to upgrade to newer, in-support systems. A lot of the time it’s just a matter of application dependencies, where you can’t get something to work in QA, and you’re not ready to migrate to current versions just yet.
The problem is that management requires you to be under support. So you get a quote from IBM to support your older software, and the price tag is astronomical – not even in the same solar system as your budget.
The good news is you do have options.


Here at TxMQ, we have a mature and extensive migration practice, but we also offer 24×7 support for back-level IBM software and products (available as either 100% onshore, or part onshore, part offshore) – all at a fraction of IBM rates.
Our support starts at under $1,000 a month and scales with your size and needs.
TxMQ has been supporting IBM customers for over 35 years. We have teams of architects, programmers, engineers and others across the US and Canada supporting a variety of enterprise needs.
Case Studies
A medical-equipment manufacturer planned to migrate from unsupported versions of MQ and Message Broker. The migration would run 6 to 9 months past end-of-support, but the quote from IBM for premium support was well beyond budget.
The manufacturer reached out to TxMQ and we were able to offer a 6-month support package, which eventually ran 9 months in total. Total cost was under $1,000 a month.
Another customer (a large health-insurance payer) faced a similar situation. This customer was running WebSphere Process Server, ILOG, Process Manager, WAS, MQ, WSRR, Tivoli Monitoring, and outdated DataPower appliances. TxMQ built and executed a comprehensive “safety net” plan to support this customer’s entire stack during a very extensive migration period.
It’s never a good idea to run unsupported software – especially in these times of highly visible compliance and regulatory requirements.
In addition to specific middleware and application support, TxMQ can also work to build a compliance framework to ensure you’re operating within IBM’s license restrictions and requirements – especially if you’re running highly virtualized environments.
Get in touch today!
(Image from Torkild Retvedt)

Integrating The Energy Grid: Smart Grid Advantages And Challenges

Over the past few decades we’ve gotten really good at building secure, robust, scalable, smart computer networks. We’ve mastered quality-of-service to prioritize certain types of data and we understand rules-based routing. Not only can we track users by sign-on and IP addresses, but we can also validate authorizations for access against LDAPs. We’ve begun to look at enterprise architecture in a more holistic way, moving past the business silos so common in the 1980s and 1990s.
Then there’s our utility infrastructure.
U.S. utility grids are arguably the most advanced invention of the past century. Their technology and ingenuity surpass the smart phone, moon landing and even the Internet. What would negatively impact our lives more – losing power or water, or losing the Internet? (Let’s not include the 15- to 25-year-olds in this survey).
Yet, our nation’s utility grids remain stuck in the 1960s. Aging generation facilities, outdated power stations and substations, and above-ground utility lines are in a constant state of repair. We have limited visibility into real-time information, let alone predictive information. We always wait until there’s an outage or failure. There’s widespread realization that a smart grid is a competitive advantage we need as a nation, yet that realization has produced few accepted standards. The destination is still so far away and, honestly, it sometimes seems we have yet to even begin the journey.
Perhaps the most widely accepted route follows the path already taken in IP networking. Let’s not reinvent the wheel when designing our smart grid. We can leverage the advancements already in place by looking at the utility grid in the same way we look at enterprise technology infrastructure. If we use the same architectural standards and benchmarks – the same principles of governance and compliance – our journey to a smart grid nation could be halfway over.
The Benefits
Our electric grid has experienced shocks to the system never envisioned by the early power-generation pioneers. Power flowed one way, and only one way. Generation stations sent power to the distribution grid, which residences and businesses consumed. But today’s power hardly flows so simply. Now we have home solar arrays, electric cars and home energy storage systems. Our grid must now handle the two-way flow of power. Our grid must be smarter.
There’s a growing acceptance that we must manage this reality in careful ways, limiting the placement and quantity of these distributed generation (DG) sites to roughly 20% per zone. But even this isn’t a universally accepted axiom.
Utilities widely accept the benefits of a smart grid. The ability to see near-real-time data on demand and energy flow is net-new information. Historically, the meter reader reported on meter-read data monthly, or every other month. Not only was real-time data inconceivable; so was any increase in the amount of data generated by cycle reads. This slow, archaic way of reading meters isn’t up to speed with today’s digital economy.
But smart grids are.
Today, smart meters are typically read up to four times an hour, or every 15 minutes. This turns a single data point from the 1980s into nearly 3,000 data points today! And that’s just per meter, per month! Where do we put this data, and what can we do with this information other than drown in it? (In other words, how can we take a sip of water from a fire hose?)
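As a rough back-of-the-envelope check (assuming 15-minute interval reads over a 31-day month, and a purely hypothetical fleet of 500,000 meters), here is where that figure comes from and how quickly it multiplies across a service territory:

    # Rough estimate of smart-meter read volume at 15-minute intervals.
    READS_PER_HOUR = 4           # one read every 15 minutes
    HOURS_PER_DAY = 24
    DAYS_PER_MONTH = 31          # a long month, worst case

    reads_per_meter_per_month = READS_PER_HOUR * HOURS_PER_DAY * DAYS_PER_MONTH
    print(f"Reads per meter per month: {reads_per_meter_per_month:,}")   # 2,976

    # Scale to a hypothetical service territory of 500,000 meters.
    METERS = 500_000
    print(f"Reads per month across the fleet: {reads_per_meter_per_month * METERS:,}")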
Consumer Benefits
In addition to how a smart grid benefits utilities, it provides countless benefits to us as consumers. First, a smart grid smooths out the flow of power, nearly eliminating brownouts, blackouts and surges.
Second, consumers now have more control over their power usage. Home energy management systems (HEMS) can adjust usage so you can schedule your energy-intensive tasks, like laundry or charging a hybrid vehicle, during off-peak times. Energy is much cheaper when electricity is in less demand. On top of that, a smart grid also facilitates quicker troubleshooting when things do go wrong.
And of course, the benefits to the environment are well documented. Better energy management means lower carbon emissions, cleaner waterways and less reliance on fossil fuels.
The Integration Challenge
In time, we will have a smarter grid. What remains today is an IT challenge – an integration challenge. Legacy systems can’t simply be taken offline just because there are better options out there. Advanced Metering Infrastructure (AMI) can’t simply be slapped onto an aging infrastructure. We need a smarter solution for integrating the disparate systems and technologies, one that can carry yesterday’s analog, manual meter reads (with their limited deployed technology) into the smart systems now coming online.
It’s All About the Data
TxMQ has partnered with IBM to pioneer a Smart Grid Smart Business solution (SmartGridSB) for utility companies to better manage the growing flood of data generated today. Better dashboards, mobile connectivity and more actionable information at your fingertips give today’s utility managers the power to deliver the results consumers insist on and public-utility commissions demand.
TxMQ’s smart consulting and staff augmentation solution teams are trained in complex integration challenges and strategic consulting. We can bridge the gap between AMI and smart grids by demonstrating complex ROI calculations for end-of-life equipment requiring newer alternatives.
Call or write today to learn how you can partner with TxMQ to deliver our SmartGridSB solution to your users and customers.
(Image from Oran Viriyincy)
 

America Needs An Education On Software Asset Management (SAM)

I recently had the privilege of attending (and co-sponsoring) the IBSMA SAM Summit in Chicago with some colleagues. It was a fantastic event with great sessions, a wonderful format and venue, and amazing networking opportunities. Representatives were in attendance from all of the major software vendors and many tool companies, alongside SAM consultancies like TxMQ.
What I noticed right away, though, was the skewed attendance. It’s wonderful seeing so many foreign firms travel thousands of miles to attend a conference in the US, but I’m really surprised by the lack of American and Canadian firms in attendance.
I have a theory I’ve been advancing on why. Like many of my theories, this one’s based on a limited sampling of statistically insignificant data sets. So please give me a lump or two of salt for starters.
First, some contextual background: It’s clear to any informed American that we, as a nation, excel at many things. We eat well, spend well, vacation well, enjoy the finer things in life when we can afford them (and oftentimes when we cannot), and we love kicking problems down the road. Denial is more than an art form. It’s a social science.
Social Security reform? Not my problem – let future generations deal with it.   National debt? Please. My kids and grandkids can pay that off. The environment? Fossil-fuel consumption? Hardly seems to be an issue for my generation.
And US management is too often focused on putting out fires, instead of building fireproof things. So it shouldn’t have been a surprise to see so few American firms interested in understanding and investing in compliance improvement and best practices.
We must work to change the culture of America at a macro level, that much is clear. But we can all work today to change the culture of our workplaces to embrace SAM and declare it a must-do effort – not a future “nice to do if we get audited” thing.
Software Asset Management should NOT be undertaken as an audit-defense practice, but as part of overall corporate strategic leadership. Corporate best practice should be to have a tightly integrated leadership organization that includes a SAM leader alongside corporate-compliance officers, security officers and financial overseers.
From software-renewal-agreement negotiations to better alignment between software usage and needs, SAM brings tremendous goodness to organizations.
I’ve written separately on much of the value of SAM, as have many others, so I won’t get into a deep-dive here. But I will say again that a well-run company, with a solid SAM program, delivers greater value to its shareholders by:

  • Minimizing waste (like unused software and entitlements)
  • Maximizing efficiency (by limiting the wasted time replatforming out-of-compliance software or applications)
  • Creating a more positive environment for stakeholders (there’s less stress and worry because there’s less uncertainty and confusion around assets and their allocation or disposition)

Let’s all do our part to help educate our workplaces on SAM as a necessary part of corporate governance and leadership. I’m ready to start the conversation: chuck@txmq.com.

MQ, The Digital Economy & You

IBM MQ continues to evolve to meet the expanding needs of the digital economy. We encounter many organizations that have yet to take full advantage of the capabilities they’ve invested in their MQ backbone. Learn about the new MQ physical appliance, as well as the MQ virtual appliance (available exclusively from TxMQ). Other topics include secure transfer, cloud messaging and more.

About Software Audits And Software Asset Management

Gartner, Inc. expects a four-fold explosion in the adoption of Software License Optimization & Entitlement (SLOE) solutions worldwide.
In the 12 months prior to September 2014, software-license audits reached an all-time high. It’s estimated that organizations now have a 68% chance of being audited by at least one software vendor this year (up from 54% in 2009).
Gartner advises it’s not whether a software vendor such as IBM will audit you, but when it’ll happen.
TxMQ has completed a number of Software Asset Management engagements and audit-remediation projects. There’s no way to avoid the reality that audits are painful exercises. The amount of staff time and resources that companies expend to get through audits is staggering. Even a small company can spend many thousands of employee hours and pay fees to partner firms like TxMQ – all while auditors, lawyers and accountants get their pounds of flesh.
The reality is that managing software usage and entitlements is an incredibly complex process for companies – a lot like managing your own personal taxes. All major vendors like IBM, Microsoft, Oracle, Adobe, VMware and the rest operate with increasingly complex rules around virtual usage, cloud usage and non-production usage.
Our philosophy at TxMQ is this: Most of us have a tax planner or accountant to handle our personal taxes (and audits if they should happen). Companies should use a partner to manage their software-asset entitlements, usage and compliance.
In all likelihood you’ll be audited by one of your vendors. How prepared are you to come out squeaky clean? What can you do to minimize the audit pain points? How can you prevent a re-audit? If you have a partner firm that’s responsible for reporting software usage and entitlements, maintaining clean records and handling “true-ups” for overages, any audit will be a breeze.
You wouldn’t go through an IRS audit without your accountant. Do you really think it makes sense to go through a software audit without a partner on your side?
TxMQ works with companies to put a Software Asset Management (SAM) strategy together to help mitigate the risk and exposure you face. We have options from full management all the way down to as-needed license reviews. There’s an option to fit every situation and need. Most companies prefer a managed-service solution, whereby TxMQ actively engages to manage software usage, but also puts in place best practices for change management, SDLC compliance and more.
In many cases, TxMQ uses tools like IBM EndPoint Manager, which further extends a company’s capabilities to:

  • Cut costs and downtime while securely managing datacenters and distributed servers
  • Reduce cost, risk and complexity of managing desktops, laptops and other devices
  • Ensure continuous security and compliance and keep companies out of the negative news
  • Maintain audit readiness and continuous license compliance with always-on Software Asset Management
  • Manage BYOD policies and the mobile enterprise

There’s no downside to a conversation. Let TxMQ begin with a no-obligation discovery call to review your situation and help you put a plan in place to minimize your compliance exposure. Contact us before the auditors contact you.
(Image by Simon Cunningham)
 

MQTT – Valuable Then and Now

In 1999, Dr. Andy Stanford-Clark of IBM and Arlen Nipper invented a simple messaging protocol designed for low-bandwidth, high-latency networks. They named it MQ Telemetry Transport, better known as MQTT. This lightweight pub/sub protocol brings dependable messaging to highly unreliable networks, requiring minimal network bandwidth and device resources while preserving MQ’s noted reliability and assured-delivery standards.
What makes MQTT still valuable today? This protocol has dramatic implications and growing use cases for the Internet of Things (IoT), where the world of connected ‘things’ spans an endless variety of devices, sometimes with minimal power availability. In other words, as all these devices “talk” via the Internet, the MQTT transport protocol keeps those conversations flowing reliably and, properly configured, securely and privately. To understand how MQTT can improve your company’s ability to navigate the digital economy, you’ll need to get more acquainted with the nuts and bolts of the protocol.
Standards: While the protocol is still undergoing standardization at OASIS, the specification is openly published and royalty-free.
SCADA and MQIsdp: SCADA Protocol and MQ Integrator SCADA Device Protocol are both old names for what is now known as MQTT.
Security: You can pass a user name and password with an MQTT packet in V3.1 of the protocol. Encryption is independent of MQTT and typically handled with SSL, though this does add network overhead. Keep in mind there are other options outside of the base protocol.
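As a rough illustration of those two points, here is a minimal sketch using the Python paho-mqtt client library (1.x-style API); the broker host, credentials and topic are placeholders of my own, not anything defined by the protocol itself:

    import paho.mqtt.client as mqtt

    client = mqtt.Client(client_id="meter-0001")

    # MQTT 3.1 lets a user name and password travel in the CONNECT packet.
    client.username_pw_set(username="field-device", password="s3cret")

    # Encryption is separate from MQTT itself: wrap the connection in TLS,
    # here using the system's default CA certificates.
    client.tls_set()

    client.connect("broker.example.com", port=8883, keepalive=60)
    client.publish("meters/0001/reading", payload="42.7", qos=1)
    client.disconnect()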
Implications and use cases: One of the oft-cited use cases, likely due to the underlying popularity of the application, is the Facebook messaging application.
When Facebook engineer Lucy Zhang was looking for a new messaging mechanism for their app, she knew she had to address bandwidth constraints, power consumption and platform variety (iOS, Android, Windows, etc). She turned to MQTT. While a truly innovative use for the protocol, this type of messaging isn’t the most typical use case.
M2M: MQTT’s true power lies in machine-to-machine communication. Its specification for device communications enables any device to communicate information to any other system or device. Smart meters are an excellent example of an MQTT use case. This lightweight messaging protocol excels in communication among multiple sensors, often in remote locations, with limited power and inconsistent network connectivity.
In the case of smart grids, utilities companies can use MQTT to better predict where crews need to be in advance of weather events. In addition, transportation authorities can monitor road conditions to modify routes in advance of storms or accidents and detour commuters around construction sites.
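To sketch the machine-to-machine side under the same assumptions (paho-mqtt 1.x-style API, hypothetical broker and topic names), a back-end collector might subscribe to every meter’s readings with a single wildcard topic and hand each message off to an analytics pipeline:

    import paho.mqtt.client as mqtt

    def on_message(client, userdata, message):
        """Called once per sensor reading; a real collector would store or analyze it."""
        print(f"{message.topic}: {message.payload.decode()}")

    collector = mqtt.Client(client_id="grid-collector")
    collector.on_message = on_message

    collector.connect("broker.example.com", port=1883, keepalive=60)

    # The '+' wildcard matches any single meter ID, so one subscription
    # covers every device publishing under meters/<id>/reading.
    collector.subscribe("meters/+/reading", qos=1)
    collector.loop_forever()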
Conclusions: MQTT has only recently come into its own as a mature, supported, reliable transport protocol to enable communication in a truly connected world – a world where meters can feed their data into analytics systems, combining with other data like weather information or social media trends, to perform predictive analytics.
TxMQ is working with a number of companies on next-generation use cases for MQTT that better drive the digital economy, improve outcomes in healthcare, enhance lives and improve our planet for all of us.
Get in touch today for information on how we can partner with you on your digital evolution.
Chuck Fried is the President and CEO of TxMQ, a Premier IBM Business Partner supporting customers in the US and Canada since 1979.   chuck@txmq.com

The Value (Or Lack Thereof?) Of User Groups

For the past 25 years I’ve played an active role in a variety of information-technology user groups. Back in the 1980s, we called these special interest groups (SIGs). I started off with several Delphi groups and then, as my interests evolved, moved on to Paradox. After that I participated in enterprise groups centered on IBM products and open-source technologies.
Then something happened – The Internet.
When I first went online in the late 1980s with CompuServe, it wasn’t easy. I predate AOL, and though I never did find much use for it, I don’t judge those that did (or still do). I ran a few SIGs on CompuServe, or CIS, as we called it (CompuServe Information Service).
Then, as more and more people and companies moved online, a trend started to develop – fewer and fewer users attended SIGs in person. I moderate groups in the US and Canada covering mostly, but not exclusively, IBM products. I also assist IBM in building up its user-group participation. In addition, I attend Salesforce user-group meetings, since I’m both a user and my firm does some Salesforce consulting and integration work.
Across the board, there has been a marked decline in user-group attendance for cities both big and small. I notice the difference when I attend and run groups in larger metrocenters like New York City, Toronto and Chicago, along with smaller cities like Buffalo, Cleveland and Pittsburgh. This summer, I hope to attend some in Europe and be able to glean some insight there.
The attendance decline brings about many challenges. I’m involved with one group that struggles each year to spend (that’s right – spend) the money accumulated through corporate membership fees. Another nagging concern is getting users to present at these meetings. The issue isn’t with the speakers, but rather with their employers’ legal teams. Many times legal restrains employees from presenting, which is a real problem when it comes down to the value of these sessions. There’s no shortage of vendors and manufacturers (of hardware and systems) who like to speak at user groups. That means these meetings can end up turning into sales pitches and commercials for software and systems. That defeats the whole point.
There’s no measure to the value of engaging with peers who face the same daily industry and workplace issues. Not only have I made priceless professional connections, but I’ve also formed lifelong friendships because of these groups. I’m never at a loss for a peer to bounce ideas off, ask a quick question of, or explore an innovative deployment tactic with. The purpose of user groups is to share experiences, successes and pains. We can speak candidly on challenges we’ve faced and how we’ve resolved them. And yes, occasionally vendors step in to discuss coming releases, new features and use cases for their implementation.
Of course, user groups aren’t limited to just IT. As a weekend-warrior fitness junkie, I’m also involved in several running and triathlon clubs. As a result, I’m never at a loss for friends to join me on a long run in the evening or a lunchtime run during a sunny workday. Every group I participate in makes me better, stronger and more effective at whatever I’m focused on, be it professional or personal.
We are a tactile species. In our digital world, we forget the power of meeting people face-to-face. True, we can accomplish great feats behind our screens and phones, but forming trusting, long-term friendships usually isn’t one of them.