Upgrading For Federal Reserve Bank MQ Integration

For financial institutions using WebSphere MQ V6.x or older to communicate with the Federal Reserve Bank, your time is running out… and fast. WebSphere MQ V6.x went End of Support on September 30th, 2012 and the Federal Reserve Bank is requiring upgrades to a current, supported version.

WebSphere MQ V7.0.0 and V7.0.1 went End of Support at the end of September 2015. For those of you wondering about WebSphere MQ V7.1, you might want to reconsider. While still technically supported by IBM, V7.1 is likely looking at another upgrade within the next 18 to 24 months.
From a business perspective, I recommend financial institutions upgrade to IBM MQ V8.x. At the very least, you could consider MQ V7.5.x.

IBM MQ V8.x, rebranded from WebSphere MQ, is now fully equipped with fix pack 3 and has proven to be very stable.

Further, MQ V7.5.0.5 and MQ V8.0.0.3 have deprecated SSLv3 connections due to the POODLE exploit and reduced the number of supported cipher suites. (It is critical that you understand the connection protocols and cipher suites supported by the Federal Reserve Bank.)
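To make the cipher-suite point concrete, here is a minimal sketch of a Java JMS client connecting to MQ over TLS with an explicitly chosen cipher suite. The host, port, channel and queue manager names are placeholders (not Federal Reserve values), and the cipher suite shown is only one example of a TLS option; confirm the exact protocols and cipher suites your counterparty accepts.

    // Sketch: MQ client connection pinned to a TLS cipher suite.
    // Connection details are placeholders; the suite must match the
    // SSLCIPH setting on the server-connection channel, and the JVM
    // trust/key stores (javax.net.ssl.*) must be configured separately.
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import javax.jms.Connection;

    public class SecureMQConnect {
        public static void main(String[] args) throws Exception {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setHostName("mq.example.internal");       // placeholder host
            cf.setPort(1414);
            cf.setQueueManager("QM1");                    // placeholder queue manager
            cf.setChannel("APP.SVRCONN.TLS");             // placeholder channel
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            cf.setSSLCipherSuite("TLS_RSA_WITH_AES_256_CBC_SHA256");

            Connection conn = cf.createConnection();
            conn.start();
            System.out.println("Connected over TLS");
            conn.close();
        }
    }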
But MQ upgrades are only half the story.

If you haven’t been keeping pace with MQ updates, you’ve likely not been keeping pace with your operating system either. OS updates to supported versions are arguably as critical as MQ updates.

Interested in learning more about your options? Let’s start a conversation. Reach out to TxMQ today.

Why Everyone Should Invest In An ITSM Tool

We’ve all heard the question before: “Should we invest in an IT Service Management Tool?”
The simple answer is yes. There’s really no counterpoint. Small, midmarket and enterprise organizations will all benefit greatly from purchasing and then leveraging an IT Service Management (ITSM) tool.
What Is IT Service Management (ITSM)?
At a high level, ITSM is the backbone of your IT organization. It’s the teams, groups and departments that handle the front-facing communication and support of your IT organization. They’re the ones who receive support requests and route them to your backend teams, developers and so on. Think of them as the face of your IT organization. They need a management tool to do their jobs effectively.
What An ITSM Tool Can Do For You
There are many ITSM tools out there such as HP Service Manager, Remedy, Service Now, IBM Control Desk, C2 Atom and many more. Each offers its own user interface and reporting structure. Some have additional add-on tools and features or different levels of packages to support your unique needs. No matter the tool you select, the majority will at minimum come with a configuration management database (CMDB) as the backend database for your tool, as well as a basic ticketing system. Both of those tools are critical to the business, so you’re already winning, because your requests and your assets are being tracked in one tool. You can easily escalate and assign tickets for support or enhancements and do some basic reporting as well as track your assets. At a minimum you’ve just saved time and resources by streamlining your ticketing process.
Is that enough to write a use case and convince your company to look at investing in an ITSM tool? Maybe not. But it doesn’t stop there. We all know that IT changes, software changes and upgrades need to be put in, and service managers need to track these changes and/or obtain approval. We also need to make sure we’ve properly documented backout plans and ensured there are no conflicting changes happening during the same window. An ITSM tool can do this for you. The change-management system in most ITSM tools can automate your change-request process with enhanced questions that assess the risk of the change, and can send automatic approval notifications to impacted parties, using your flashy new CMDB to determine who owns or uses the system and who may be affected by the change.
What’s so great is that it saves your change information and backout plans for future reference and knowledge sharing. Some even have an integrated change calendar that will show you any overlapping changes or maintenance windows that may impact your change. You’ll also be able to relate a change record to an incident ticket if additional support is needed during the change or if the change causes an outage. This is a more effective way to track any trending or knowledge needed for future changes.
Most ITSM tools also offer a knowledge base as an out-of-the-box option, because knowledge sharing and transfer is key to successful service management. The ability for a developer or network engineer to provide relevant information back to the service desk in a searchable format can increase your first-call resolutions (FCRs) and reduce the time it takes to identify how to escalate an issue. The knowledge base can also be used to share knowledge with your user community, with basic troubleshooting or automated support for frequently asked questions and known issues with workarounds. This will in turn reduce the number of recurring calls to your service desk for issues that can be easily resolved by the user, and will free up your service desk analysts to handle more technical requests.
The above-mentioned features – CMDB, ticketing tool and knowledge base – are your basic features of an ITSM Tool. But there are other out-of-the-box functions, plus additional add-ons you can purchase to serve other business needs. These can include trending analysis, reporting/metrics, software-asset management, hardware-asset management, project-portfolio management, event-management integration, self-service portal, automated workflows, SMS escalations or phone-calling tree automation, and application-programming interfaces (APIs) that integrate with other systems to read from or write to the ITSM tool.
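To make the API point concrete, here’s a rough sketch of creating an incident ticket through the kind of REST interface most ITSM tools expose. The endpoint path, field names and credentials are purely illustrative (ServiceNow’s Table API follows a similar pattern, but check your own tool’s documentation).

    // Illustrative only: post a new incident to a hypothetical ITSM REST endpoint.
    // Endpoint, JSON fields and credentials are placeholders; each tool
    // (ServiceNow, IBM Control Desk, Remedy, etc.) documents its own API shape.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class CreateTicket {
        public static void main(String[] args) throws Exception {
            String auth = Base64.getEncoder()
                    .encodeToString("api.user:api.password".getBytes());
            String body = "{\"short_description\":\"Email outage on 3rd floor\","
                    + "\"urgency\":\"2\",\"assignment_group\":\"Service Desk\"}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://itsm.example.com/api/now/table/incident"))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Basic " + auth)
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + ": " + response.body());
        }
    }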
Why Do We Need An ITSM Tool?
Look at your IT organization and think for a moment of the services you provide. You most likely have some sort of request process for the service desk via email, phone, instant message or even web requests.
How do the service agents handle these requests? How do they document and resolve these requests? What happens if the request needs to be escalated?
The process you have in place probably works: requests are handled, problems get resolved and that guy on the 3rd floor who wanted a new laptop eventually got one. So why would you need an ITSM tool if everything is great and it works? Don’t fix it unless it’s broken, right? Wrong.
Even if your process seems like it’s working, is it really? Are you tracking changes? Can you easily provide trending analysis on common issues? Do you have a CMDB that stores your people, processes and assets, and the lifecycle for them? Are your requests being escalated and turned around within an acceptable service level agreement (SLA)? How are work efforts prioritized? What happens when an outage occurs? Are teams notified? Is the outage documented and followed up on? How many different systems and applications are you using to make all of this happen? How much time, effort, support and money are you spending on those systems just to provide the basic functionality of requesting IT services?
Investing in an ITSM tool will almost pay for itself simply by reducing the costs associated with support, time, resources and recurring outages. It’ll enable you to streamline your support process and even automate some of your manual tasks, like tracking, metrics reporting and communicating about the services you provide to the organization.
Purchasing An ITSM Tool Vs. Building An In-House Tool
Let’s say you decide that an ITSM tool will absolutely help your organization. The purchasing cost is now under review, but you have a team of developers on the payroll that might have some availability to take on a project and produce an in-house ITSM solution. Here are some of the pros and cons to consider before building the tool in-house.
Pros:

  • Everything’s done in-house
  • You don’t need to spend any money up front to acquire a product
  • There’s no licensing
  • Your dev team knows how to support it
  • It’s customized to your specific needs

Cons:

  • Your developers are being paid to work on this project when they could be doing other production development
  • As your environment changes, your in-house solution will need to be updated, which will eat up more development time
  • If your solution is web-based, updates to browsers, scripts and other plugins may break it and require more development work
  • Knowledge transfer of the tool and how it was developed needs to be documented. If your developer leaves, the next developer must be able to support or upgrade the app
  • You may need to write code to integrate other applications such as email or phone into your app. As those systems are upgraded, the code may need to be revised
  • Requirements for the app may change as the organization matures or grows, which will consume additional development time
  • If and when the app reaches the end of its lifecycle, there’s no support or upgrade options readily available
  • There’s no CMDB, unless your team plans on developing one
  • The system of record will not be easily transferrable to another system of record if needed in the future

These are high-level pros and cons, but each organization will have more specific and customized lists depending on the functionality and requirements needed. Given all the cons, why not let someone else who’s already invested time and resources do the work for you? The tools out there are robust, and some are open for additional customization or in-house development to fit your specific needs. There are also additional support options for these tools to assist your organization when issues arise or during implementation.
Don’t waste your resources or time trying to reinvent the wheel when someone’s already invented one and enhanced it.
Original image by Max Max
 

SharePoint And Why You're Probably Using It Wrong

So you have SharePoint. You acquired it through a package you purchased with other Microsoft products, or you heard about it from someone and decided to stand it up and see what it can do. Either way, you spent some time, resources and much-needed network capacity to put this in place. Now what? That’s a question many organizations ask, and if you’re not asking this question you’re probably still using SharePoint wrong. Let me explain why.
Many of the organizations I’ve spent time with have SharePoint. Most have the Foundation version and have no idea why they would pay for the Enterprise license. Foundation is still a strong version and can be used to reduce company expenditures on other vendors for things like hosting your intranet or conducting surveys, to name a few examples. I’ve seen this time and time again. A company has an external vendor that hosts its intranet. The design elements are minimal, and developing a product that can integrate with the organization’s email client or other applications can be costly. Why would you spend that time and money when the capability is already sitting on your network, barely utilized? SharePoint can be your front-facing intranet/extranet site. It can be your employees’ daily landing page with links to tools, web-hosted applications, announcements, statistics, documents, pictures, knowledge, reports, presentations, surveys and more.
Think about it for a moment: You probably have a team portal set up for each department, or some of your departments. It’s probably a basic SharePoint template with an Announcements section, document repository, calendar, maybe a fancy logo and a tab at the top to go to the parent site. If this sounds like you, then you’re using SharePoint wrong. Remember, SharePoint’s a tool with many capabilities.
With the basic features offered through SharePoint Designer and the default page and web part templates, you can customize each portal, page and web part to fit many of your business needs without spending money on development. You don’t need a web developer to manipulate multiple lines of code to embed a video on your page or customize the layout. You can assign rights to individual teams, and with a little training they can be off and running on their own, designing portals specific to their function and needs. I’m not saying go and fire your web developers. I am saying you can use the built-in functionality of SharePoint so your web developers can focus on other projects. You can still code pages in SharePoint and design web applications, custom API calls and external-facing sites. So keep those web developers around.
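If your developers do want to go beyond the out-of-the-box web parts, SharePoint also exposes a REST API for reading and writing list data. Here’s a minimal sketch of reading items from an “Announcements” list; the site URL and list title are placeholders, and authentication (NTLM, Kerberos or OAuth, depending on your farm) is omitted for brevity.

    // Sketch: read items from a SharePoint list via its REST API.
    // Site URL and list title are placeholders; authentication is
    // environment-specific and left out of this example.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ReadAnnouncements {
        public static void main(String[] args) throws Exception {
            String url = "https://intranet.example.com/sites/teamsite"
                    + "/_api/web/lists/getbytitle('Announcements')/items";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("Accept", "application/json;odata=verbose")
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // JSON payload of list items
        }
    }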
Now that I have you thinking about what you can use SharePoint for, let’s talk about why you might consider the Enterprise license. The first thing I think of when someone asks about the Enterprise license is workflows. Workflows can be designed to automate many, many things. Let’s say you have an employee-engagement survey. You want to know how your employees feel about the organization or an application that just went live. You use SharePoint to create a really cool survey that changes its questions based on the previous answers, then take that information and feed it into a live, up-to-the-minute graph on your main page. How do you do that? Answer: Workflow.
Maybe you have a form that needs to be filled out, and when someone submits the form, an email needs to be sent to a group for review. How do you do that? Answer: Workflow.  If you haven’t already guessed why the Enterprise license is useful, the answer is: Workflows.
Another thing that comes to mind when someone asks about the Enterprise license is MS Office integration. Yes, I said it. MS Office integration. It delivers the ability to collaborate on projects or documents right through SharePoint, or create awesome Visio diagrams on your main page. Maybe you really want to use an Access database for something and need an easy way to query the results in a list. I’m here to tell you that the SharePoint Enterprise license has MS Office integration.
A few other features you’ll miss without the Enterprise license include business intelligence, robust search, custom social-media-style profile pages, more design elements, scorecards, dashboards and a better mobile experience. All versions of SharePoint have Android and iOS support; however, I’ve found the Enterprise version has navigation features that work better on mobile devices.
If you’re not already preparing a use case for SharePoint, and an argument for why you should upgrade your license, then you really should get out there on the Internet and browse some additional topics.  Check out what other companies are talking about.  Really think hard about why you have this product in your environment you’re not doing anything with. There are many resources available to help you start your SharePoint journey.  Why not start it today?
Art work provided by John Norris

Why ETL Tools Might Hobble Your Real-Time Reporting & Analytics

Companies report large investments in their data warehousing (DW) and business intelligence (BI) systems, with a large portion of the software budget spent on extract, transform and load (ETL) tools that ease the population of those warehouses and the mapping of data to new schemas and data structures.

Companies do need DW or BI for analytics on sales, forecasting, stock management and much more, and they certainly don’t want to run those additional analytics workloads on top of already-taxed core systems, so keeping them isolated is a solid architectural decision. Consider, however, the extended infrastructure and support costs of managing a new ETL layer: it’s not just the building of it that demands effort. There’s the ongoing scheduling of jobs, on-call support when jobs fail, the challenge of an ever-shrinking batch window in which such jobs can run without affecting other production workloads, and other considerations that make the initial warehouse implementation expense look like penny candy.

So not only do such systems come with extravagant cost, but to make matters worse, the vast majority of jobs run overnight. That means, in most cases, the best possible scenario is that you’re looking at yesterday’s data, not today’s. All your expensive reports and analytics are at least a day behind what you require. What happened to the near-realtime information you were promised?
Contrast the typical BI/DW architecture with the better option of building out your analytics and report processing using realtime routing and transformation with tools such as IBM MQ, DataPower and Integration Bus. Most of the application stacks that process this data in realtime already have all the related keys in memory – customer numbers or IDs, order numbers, account details, etc. – and are using them to create or update the core systems. Why duplicate all of that again in your BI/DW ETL layer? If you do, you’re dependent on ETL jobs going into the core systems to find out what happened during that period and extracting all that data again to put it somewhere else.

Alongside this, most organizations are already running application messaging and notifications between applications. If you have all the data keys in memory, use a DW object, method, function or macro to drop the data as an application message into your messaging layer. The message can then be routed to your DW or BI environment for transformation and loading there – no extraction needed – and you can get rid of your ETL tools.
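As a rough sketch of what “dropping the data as an application message” might look like, here’s a JMS publisher that sends an order-update event to an MQ topic at the moment the application updates the core system. It assumes an MQ version with JMS 2.0 support; the connection details, topic string and payload fields are placeholders, not a prescribed design.

    // Sketch: publish a business event to an IBM MQ topic when the core
    // system is updated, instead of re-extracting the data overnight.
    // Connection details, topic string and payload are placeholders.
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import javax.jms.JMSContext;
    import javax.jms.Topic;

    public class OrderEventPublisher {
        public static void main(String[] args) throws Exception {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setHostName("mq.example.internal");
            cf.setPort(1414);
            cf.setQueueManager("QM1");
            cf.setChannel("APP.SVRCONN");
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            try (JMSContext ctx = cf.createContext()) {
                Topic topic = ctx.createTopic("topic://business/orders/updated");
                String event = "{\"orderId\":\"A1234\",\"customerId\":\"C9876\","
                        + "\"status\":\"SHIPPED\"}";
                ctx.createProducer().send(topic, event); // keys already in memory
            }
        }
    }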

Simplify your environments and lower the cost of operation. If you have multiple DW or BI environments, use the pub/sub capabilities of IBM MQ to distribute the message. You’ll be trading a nominal increase in CPU for the elimination of problems, headaches and costs in your DW or BI.
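On the pub/sub point, each DW or BI environment simply becomes another subscriber to the same topic. A minimal consumer might look like the sketch below, again with placeholder connection details.

    // Sketch: a DW/BI environment subscribing to the same order-event topic.
    // Each additional environment is another subscription, not another ETL job.
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Topic;

    public class DwEventSubscriber {
        public static void main(String[] args) throws Exception {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setHostName("mq.example.internal");
            cf.setPort(1414);
            cf.setQueueManager("QM1");
            cf.setChannel("DW.SVRCONN");
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            try (JMSContext ctx = cf.createContext()) {
                Topic topic = ctx.createTopic("topic://business/orders/updated");
                JMSConsumer consumer = ctx.createConsumer(topic);
                while (true) {
                    String event = consumer.receiveBody(String.class);
                    // Transform and load into the warehouse here; no extraction step.
                    System.out.println("Loading event: " + event);
                }
            }
        }
    }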

Rethinking your strategy in terms of EAI while removing the whole process and overhead of ETL may indeed bring your whole business analytics to the near-realtime reporting and analytics you expected. Consider that your strategic payoff. Best regards in your architecture endeavors!
Image by Mark Morgan.

Oracle Announces End of Support For JD Edwards EnterpriseONE Technology Foundation

Long title. What’s this about?
In short, back in 2010, Oracle announced the withdrawal of JDE EnterpriseOne Technology Foundation. The final nail in this coffin comes on September 30, 2016, when technical support officially ends.
What this means is that for many customers (and I’ll get into particulars shortly) there’s a requirement to make a critical decision to either move to Oracle’s Red Stack, or procure new IBM licenses in order to remain on IBM’s Blue Stack.
There’s a variety of customers running the stack, and nearly as wide a variety of options for how companies may have deployed their JDE solution. WebSphere with DB2 is the original and most common. WebSphere with Oracle as the backend is another common combo. And there’s a variety of other blends of supported web/application servers, database servers and related middleware.
Regardless of the configuration, in most cases, these products were part of the bundled solution that customers licensed from Oracle, and now a decision point’s been reached.
This doesn’t mean Oracle’s dropping support for IBM products. This does mean there’s a change in the way they’re licensed.
So what is “Technology Foundation”?
To quote from Oracle’s documents verbatim: Technology Foundation is an Oracle product that provides license for the following components:

  • JD Edwards EnterpriseOne Core Tools and Infrastructure, the native design time and runtime tools required to run JD Edwards EnterpriseOne application modules
  • IBM DB2 for Linux, Unix, and Windows, limited to use with JD Edwards EnterpriseOne programs
  • IBM WebSphere Application Server, limited to use with JD Edwards EnterpriseOne programs
  • IBM WebSphere Portal, as contained in JD Edwards EnterpriseOne Collaborative Portal

Technology Foundation is also referred to by the nickname “Blue Stack.”
If your license for JD Edwards EnterpriseOne applications includes an item called “Technology Foundation” or “Technology Foundation Upgrade,” this affects you.
If your license instead uses other terms, such as “Oracle Technology Foundation,” then this change does NOT affect you. This is also different from the foundation for JD Edwards World.
So what now? In short, if you have Blue Stack, you should contact TxMQ or IBM immediately to acquire your own licensed products to continue to run your Oracle solution. TxMQ can offer aggressive discounts to Oracle customers subject to certain terms and conditions. Contact us for pricing details. We can help with pricing, as well as with any needed migration planning and implementation support.
Contact chuck@txmq.com for immediate price quotes and migration planning today.
Image from Håkan Dahlström.

How Do I Support My Back-Level IBM Software (And Not Blow The Budget)?

So you’re running outdated, obsolete, out-of-support versions of some of your core systems? WebSphere MQ maybe? Or WebSphere Process Server, or DataPower? The list is endless.
Staff turnover may be your pain point – a lack of in-house skills – or maybe it’s a lack of budget to upgrade to newer, in-support systems. A lot of times it’s just a matter of application dependencies, where you can’t get something to work in QA and you’re not ready to migrate to current versions just yet.
The problem is that management requires you to be under support. So you get a quote from IBM to support your older software, and the price tag is astronomical – not even in the same solar system as your budget.
The good news is you do have options.

Here at TxMQ, we have a mature and extensive migration practice, but we also offer 24×7 support (available as either 100% onshore, or part onshore and part offshore) for back-level IBM software and products – all at a fraction of IBM rates.
Our support starts at under $1,000 a month and scales with your size and needs.
TxMQ has been supporting IBM customers for over 35 years. We have teams of architects, programmers, engineers and others across the US and Canada supporting a variety of enterprise needs.
Case Studies
A medical-equipment manufacturer planned to migrate from unsupported versions of MQ and Message Broker. The migration would run 6 to 9 months past end-of-support, but the quote from IBM for premium support was well beyond budget.
The manufacturer reached out to TxMQ and we were able to offer a 6-month support package, which eventually ran 9 months in total. Total cost was under $1,000 a month.
Another customer (a large health-insurance payer) faced a similar situation. This customer was running WebSphere Process Server, ILOG, Process Manager, WAS, MQ, WSRR, Tivoli Monitoring and outdated DataPower appliances. TxMQ built and executed a comprehensive “safety net” plan to support this customer’s entire stack during a very extensive migration period.
It’s never a good idea to run unsupported software – especially in these times of highly visible compliance and regulatory requirements.
In addition to specific middleware and application support, TxMQ can also work to build a compliance framework to ensure you’re operating within IBM’s license restrictions and requirements – especially if you’re running highly virtualized environments.
Get in touch today!
(Image from Torkild Retvedt)

Integrating The Energy Grid: Smart Grid Advantages And Challenges

Over the past few decades we’ve gotten really good at building secure, robust, scalable, smart computer networks. We’ve mastered quality-of-service to prioritize certain types of data and we understand rules-based routing. Not only can we track users by sign-on and IP addresses, but we can also validate authorizations for access against LDAPs. We’ve begun to look at enterprise architecture in a more holistic way, moving past the business silos so common in the 1980s and 1990s.
Then there’s our utility infrastructure.
U.S. utility grids are arguably the most advanced invention of the past century. Their technology and ingenuity surpass the smart phone, moon landing and even the Internet. What would negatively impact our lives more – losing power or water, or losing the Internet? (Let’s not include the 15- to 25-year-olds in this survey).
Yet, our nation’s utility grids remain stuck in the 1960s. Aging generation facilities, outdated power stations and substations, and above-ground utility lines are in a constant state of repair. We have limited visibility into real-time information, let alone predictive information. We always wait until there’s an outage or failure. There’s widespread realization that a smart grid is a competitive advantage we need as a nation, yet that realization has produced few accepted standards. The destination is still so far away and, honestly, it sometimes seems like we have yet to even begin the journey.
Perhaps the most widely accepted route follows the path already taken in IP networking. Let’s not reinvent the wheel when designing our smart grid. We can leverage the advancements already in place by looking at the utility grid the same way we look at enterprise technology infrastructure. If we use the same architectural standards and benchmarks – the same principles of governance and compliance – our journey to a smart grid nation could be halfway over.
The Benefits
Our electric grid has experienced shocks to the system never envisioned by the early power-generation pioneers. Power flowed one way, and only one way. Generation stations sent power to the distribution grid, which residences and businesses consumed. But today’s power hardly flows so simply. Now we have home solar arrays, electric cars and home energy storage systems. Our grid must now handle the two-way flow of power. Our grid must be smarter.
There’s a growing acceptance that we must manage this reality in careful ways, limiting the number and location of these distributed generation (DG) sites to 20% per zone. But even this isn’t a universally accepted axiom.
Utilities widely accept the benefits of a smart grid. The ability to see near-real-time data on demand and energy flow is net-new information. Historically, the meter reader would report meter data monthly, or every other month. Not only was real-time data inconceivable, so was any increase in the amount of data generated by cycle reads. That slow, archaic way of reading meters isn’t up to speed with today’s digital economy.
But smart grids are.
Today, smart meters are typically read up to four times an hour, or every 15 minutes. Four reads an hour, 24 hours a day, 30 days a month works out to roughly 2,880 readings – call it 3,000 data points where the 1980s gave us one. And that’s just per meter, per month! Where do we put this data, and what can we do with this information other than drown in it? (In other words, how can we take a sip of water from a fire hose?)
Consumer Benefits
In addition to benefiting utilities, a smart grid provides countless benefits to us as consumers. First, a smart grid smooths out the flow of power, nearly eliminating brownouts, blackouts and surges.
Second, consumers now have more control over their power usage. Home energy management systems (HEMS) can adjust usage so you can schedule your energy-intensive tasks, like laundry or charging a hybrid vehicle, during off-peak times, when demand for electricity is lower and energy is much cheaper. On top of that, a smart grid facilitates quicker troubleshooting when things do go wrong.
And of course, the benefits to the environment are well documented. Better energy management means lower carbon emissions, cleaner waterways and less reliance on fossil fuels.
The Integration Challenge
In time, we will have a smarter grid. What remains today is an IT challenge – an integration challenge. Legacy systems can’t simply be taken offline just because there are better options out there. Advanced Metering Infrastructure (AMI) can’t simply be slapped onto an aging grid. We need a smarter solution for integrating the disparate systems and technologies – one that propels yesterday’s analog, manual meter reads (and the limited technologies deployed around them) toward the smart systems now coming online.
It’s All About the Data
TxMQ has partnered with IBM to pioneer a Smart Grid Smart Business solution (SmartGridSB) for utility companies to better manage the growing flood of data generated today. Better dashboards, mobile connectivity and more actionable information at your fingertips give today’s utility managers the power to deliver the results consumers insist on and public-utility commissions demand.
TxMQ’s smart consulting and staff augmentation solution teams are trained in complex integration challenges and strategic consulting. We can bridge the gap between AMI and smart grids by demonstrating complex ROI calculations for end-of-life equipment requiring newer alternatives.
Call or write today to learn how you can partner with TxMQ to deliver our SmartGridSB solution to your users and customers.
(Image from Oran Viriyincy)
 

America Needs An Education On Software Asset Management (SAM)

I recently had the privilege of attending (and co-sponsoring) the IBSMA SAM Summit in Chicago with some colleagues. It was a fantastic event with great sessions, a wonderful format and venue and amazing networking opportunities. Representatives were in attendance from all of the major software vendors and many tool companies, alongside SAM consultancies like TxMQ.
What I noticed right away, though, was the skewed attendance. It’s wonderful seeing so many foreign firms travel thousands of miles to attend a conference in the US, but I’m really surprised by the lack of American and Canadian firms in attendance.
I have a theory I’ve been putting forward about why. Like many of my theories, this one’s based on a limited sampling of statistically insignificant data sets. So please give me a lump or two of salt for starters.
First, some contextual background: It’s clear to any informed American that we, as a nation, excel at many things. We eat well, spend well, vacation well, enjoy the finer things in life when we can afford them (and oftentimes when we cannot), and we love kicking problems down the road. Denial is more than an art form. It’s a social science.
Social Security reform? Not my problem – let future generations deal with it. National debt? Please. My kids and grandkids can pay that off. The environment? Fossil-fuel consumption? Hardly seems to be an issue for my generation.
And US management is too often focused on putting out fires, instead of building fireproof things. So it shouldn’t have been a surprise to see so few American firms interested in understanding and investing in compliance improvement and best practices.
We must work to change the culture of America at a macro level, that much is clear. But we can all work today to change the culture of our workplaces to embrace SAM and declare it a must-do effort – not a future “nice to do if we get audited” thing.
Software Asset Management should NOT be undertaken merely as an audit-defense practice, but as part of overall corporate strategic leadership. Corporate best practice is to have a tightly integrated leadership organization that includes a SAM leader alongside corporate-compliance officers, security officers and financial overseers.
From software-renewal-agreement negotiations to better alignment between software usage and needs, SAM brings tremendous goodness to organizations.
I’ve written separately on much of the value of SAM, as have many others, so I won’t get into a deep-dive here. But I will say again that a well-run company, with a solid SAM program, delivers greater value to its shareholders by:

  • Minimizing waste (like unused software and entitlements)
  • Maximizing efficiency (by limiting the wasted time replatforming out-of-compliance software or applications)
  • Creating a more positive environment for stakeholders (there’s less stress and worry because there’s less uncertainty and confusion around assets and their allocation or disposition)

Let’s all do our part to help educate our workplaces on SAM as a necessary part of corporate governance and leadership. I’m ready to start the conversation: chuck@txmq.com.

DataPower Firmware V7.2 Upgrade Details (More Cloud!)

One of the coolest things about IBM DataPower is how it provides a single security and integration gateway solution using a simple, policy-driven interface. And it’s ultra-flexible, so it’s easily deployed to multiple cloud environments. And that’s really the trick, isn’t it? The need to deploy across today’s growing mix of public, private and hybrid clouds. It’s not as easy as it seems.
The good news is that the newest DataPower Gateway Firmware V7.2 upgrade extends multiple-cloud support with enhanced security and integration capabilities for mobile, API and web workloads. Want to run DataPower to gate Amazon Web Services? Great, this firmware update is definitely for you.
Cloud environments like Amazon AWS and IBM SoftLayer are hosting environments that allow a business to deploy an optimal set of supported infrastructure resources. Using DataPower, businesses can then quickly build gateway solutions and rely upon the cloud provider to maintain the underlying infrastructure. The result is an overall lower cost to deploy and secure workloads (especially mobile workloads), plus additional flexibility for faster responses to market trends and business opportunities.
The DataPower Firmware V7.2 enables this type of process improvement through several key upgrades:

  • Use GatewayScript to easily integrate Systems of Record with Systems of Engagement
  • Deploy IBM DataPower Gateways on Amazon Elastic Compute Cloud (EC2) and IBM SoftLayer CCI
  • Enhanced hybrid cloud integration: Use the Secure Gateway service to securely connect between IBM Bluemix applications and on-premise services secured by DataPower Gateways
  • Enhanced message-level security for mobile, API and web workloads with JSON web encryption for message confidentiality, JSON Signature for message integrity, plus JSON Web Token and JSON Web Key
  • Protect mission-critical applications with enhanced TLS protocol using Elliptic Curve Cryptography (ECC)
  • Build automation, deployment and migration scripts more quickly using the new Representational State Transfer (REST)-based management API (see the sketch after this list)
  • Enjoy increased high availability with enhanced load-balancer group health checks
  • Support for IMS Commit mode 0 for more flexible integration with systems of record
  • IBM WebSphere eXtreme Scale 8.6+ support for distributed caching

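By way of illustration, the REST management interface mentioned in the list above can be scripted with any HTTP client. The sketch below queries object status in a DataPower domain; the hostname, port (commonly 5554), domain name and credentials are placeholders, and the exact resource paths and TLS trust setup should be checked against your firmware’s documentation.

    // Sketch: query a DataPower appliance via its REST management interface.
    // Host, port, domain and credentials are placeholders; TLS trust-store
    // configuration for the appliance's certificate is omitted here.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class DataPowerStatus {
        public static void main(String[] args) throws Exception {
            String auth = Base64.getEncoder()
                    .encodeToString("admin:changeme".getBytes());

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://dp.example.internal:5554"
                            + "/mgmt/status/default/ObjectStatus"))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // JSON status of domain objects
        }
    }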
TxMQ specializes in DataPower upgrades and services. Need some quick advice about deploying this firmware upgrade into production? Want to decouple from core IT and speed your time to market? Want to integrate your systems of record with your new systems of engagement? That’s what we do. Let’s start a conversation today: Click the chat icon below, or email our president chuck@txmq.com.
(Image from Paul)

How IBM MQ v8 Powers Secure Cloud Integration

In this quickly growing digital economy, where demands on things like security, cloud and mobility keep increasing, IBM MQ has been growing to meet those demands. To pick two of the three topics, MQ v8 can deliver secure cloud integration straight out of the box.
It is important to know what type of cloud we’re really talking about. Are you talking about moving all of your services into the cloud – even your virtual desktops? Or a hybrid cloud, where a mix of cloud computing supplements your own services? Or a private cloud, where segments of internal computing services are totally isolated from general services? There are different considerations for each scenario.
Regardless of the type of cloud-computing services you’re using, you still need to integrate these services, and you really need to ensure that your integration has security, data integrity and the capability of sending messages once only, with assured delivery. Cloud alone can’t provide that. MQ can and does – and it does it out of the box, with several recent enhancements to ensure secure integration.
With the digital economy, we’re all sharing data, including personal, banking and health data. We need to keep this data secure when it’s being shared, and control who has access to it. Then of course there’s the large compliance piece we need to meet. How does MQ meet all these demands? The answer is authentication, and MQ’s approach is much like being asked for proof of ID at the post office when you go to pick up a package. MQ v8 has been enhanced to support full user authentication right out of the box. No more custom exits and plugins.
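As a minimal sketch of what that looks like from a JMS client (assuming placeholder connection details and a queue manager with connection authentication enabled), the user ID and password are passed on the connection and checked by the queue manager itself – no custom security exit required.

    // Sketch: MQ v8 connection authentication from a JMS client.
    // Connection details and credentials are placeholders; the queue manager
    // must be configured (CONNAUTH) to check them.
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import javax.jms.JMSContext;

    public class AuthenticatedConnect {
        public static void main(String[] args) throws Exception {
            MQConnectionFactory cf = new MQConnectionFactory();
            cf.setHostName("mq.example.internal");
            cf.setPort(1414);
            cf.setQueueManager("QM1");
            cf.setChannel("APP.SVRCONN");
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            // Send the credentials in an MQCSP structure so the queue manager,
            // not a custom exit, performs the user check.
            cf.setBooleanProperty(WMQConstants.USER_AUTHENTICATION_MQCSP, true);

            try (JMSContext ctx = cf.createContext("appuser", "app.password")) {
                System.out.println("Connected and authenticated as appuser");
            }
        }
    }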
For distributed platforms, you can authenticate against the local OS or against a centralized repository such as LDAP. For z/OS, you’re still focused on local authentication.
And this next point is important: MQ has for quite some time supported certificate authentication of applications connecting to MQ services, but this always meant the queue manager presented a single certificate, shared with everyone. MQ has now been enhanced to support the use of multiple certificates for authentication, securing connections and encryption using separate key pairs. MQ still supports both SSL and TLS, although there are strong recommendations to switch from SSL to TLS because of the POODLE vulnerability.
(Image from mrdorkesq)