White Paper: E-Commerce Trends

TxMQ Staff Contribution

Project Description: E-Commerce Trends

This paper will help you better understand how your consumers are interacting with E-Commerce and what they expect from your company’s website. It will also teach you what your company needs to do to keep up with emerging E-Commerce trends.

E-Commerce Is Exploding

In the past 10 years, e-commerce has taken on a life of its own. You may say, “Well, I bought things online 10 years ago, too.” That’s true, you did. But the online purchasing process is now fundamentally more evolved.
Gone are the days when a website simply offered a good or service that was added to a shopping cart, paid for at checkout and shipped.
In 2012 the best e-commerce sites will have teams of people to assess pertinent data about their consumers. This data will enhance digital marketing efforts, further supporting what has always been an integral component of a company’s overall strategy. The information collected as a byproduct of online shopping yields truer insight into important consumer behaviors, footprints, demographics and patterns.

How better to approach marketing than by understanding your audience’s needs?

The best e-commerce platforms (e.g., IBM® WebSphere Commerce) allow companies to collect information and record customers’ transactions, preferences and shopping habits.
The importance of this is two-fold. Not only can companies focus their online efforts toward better meeting customer expectations, they can also market across mobile and offline channels as well.
If you haven’t focused any energy on your e-commerce system, we suggest that now is the time to do it. Don’t get left behind as your competitors forge ahead using the newest technologies and leveraging the consumer awareness they will gain from upgraded systems and integrated analytics.
According to analyst Brian Walker of Forrester Research, Inc., half of all retail transactions will either take place online or be inspired by something consumers see on the web.
Emerging trends show that the website is no longer the main e-commerce touch point. Customer interaction will now include mobile phones, tablets, Internet-enabled store checkout systems and more.
Your company needs to be prepared to manage multi-channel selling and fulfillment.

E-Commerce Trends

There’s constant change happening all around you. It’s your job to keep up with it all. Here are some emerging e-commerce trends to keep your eye on as we dive into the second half of 2014.

Going Mobile

2014 can be one of the biggest breakout years yet in terms of companies going mobile. The demand for mobile developers is on the rise as more companies begin to develop mobile applications for their services.
RECOMMENDATION: Don’t just build a mobile app. Instead, use HTML5 to ensure your customers can view your site no matter what device they’re working with. HTML5 is dynamic, and it creates the opportunity to operate from a single, catch-all platform without worrying about being device-specific.

Simple Navigation

Make sure your customers can easily search your website and find the products they are looking for. People don’t navigate the way they used to, and since time is precious, one-click transfers to a shopping cart are imperative. You don’t want to lose a customer in the hassle and steps of a long transaction.

Cross Channel Experiences

E-Commerce must be accessible in any form your customer wants it. While e-commerce does play a huge role in customer experience, it is complemented by retail, mobile and social experiences as well.

Availability of Digital Goods

Brick-and-mortar companies are hurting. With options such as site-to-store shipping, goods are more readily available to consumers.
In addition, many traditionally brick-and-mortar shops will begin to offer goods online or even bring the e-commerce experience in-store via terminal transactions.

Video Vs. Images

According to Warren Knight of business2community.com, “2012 will be the year of video commerce.” This year we will begin to see video demos showing real users actually using or modeling a product in motion. One click on the featured product will send consumers right to the final purchase destination.

What Do Consumers Really Want?

If you’re running an E-Commerce business, what is it that your customers really want? Discovering what will make them truly happy is half the battle toward building brand awareness and loyalty.
The other half of the battle is fulfillment. Deliver on your promises, as promised, and you will have customers returning again and again to purchase your products.

Pricing

Discounts. Discounts. Discounts. Have you noticed the recent trend toward discounting an item or product when a certain number have been purchased?
Customers want to feel like they’re getting a bargain for a quality item. And with the boom of social media outlets like Facebook, Twitter or Pinterest, where consumers can act as a driver of merchant sales by sharing deals with friends, group discounts are easily promoted and taken advantage of.

One Stop Shopping

Give your consumers a quick and easy exit to their shopping cart and purchase completion. Nobody wants to have to fill out forms and go through several steps to complete an order.
Many sites are allowing consumers to check out as guests and providing them the opportunity to register their accounts at a later time. Keep it quick and easy and your customers will continue to return.

Security

Now is a better time than ever to make sure your website is up to date on security issues. With the increase in apps, cloud platforms and social sharing sites, security threats are an everyday reality.
Investing in online security is one of the best ways to invest in your company’s future.

Consistency

We touched on this a bit in the fulfillment section above. Consumers should be able to get efficient, consistent customer service. That means orders are shipped on time and the quality of the product is always top notch.
This is what customer loyalty and brand referral is built upon. Once a customer returns to your site to purchase your product more than once, you’ve created loyalty. Now it’s up to you to hold up your end and grow that trust.

What Can Your Company Do?

More and more, companies are utilizing the information they obtain from online transactions to create full-scale marketing campaigns.
Don’t work against yourself. Investing in a flexible, reliable system will allow you to serve customers across all channels, improve the way your company manages its products and information, and create a deep understanding of your customers’ needs.
The change in your company needs to start at the top. If your company’s CEO isn’t behind the transition, you will struggle. Your entire company needs to see the benefits of the transformation.
Many times, your company will need to adjust the way it operates from the ground up. That may mean re-training your employees, or changing the way you incent them to meet customers’ expectations.

Webinar: IBM DataPower Saves Healthcare Payer $500k Annually

This informative webinar on IBM DataPower, presented by TxMQ’s Bob Becktell, details how IBM’s DataPower XB62 is helping Medical Mutual of Ohio realize $500k ROI. Bob was one of TxMQ’s lead consultants on site at MMO and led the project from conception through to completion. Topics covered include:

  • Setting Up Trading Partner Gateway
  • Developing B2B Infrastructure Solution
  • IBM DataPower XB62 Appliance
  • HIPAA Compliance
  • Secure Data Exchange

The webinar can be viewed via the link below, or contact us to download an MP4 version to your drive or mobile device.
Webinar video: https://www.youtube.com/embed/x7ogNkZSHRw

White Paper: Why Upgrade from WebSphere Application Server (WAS) v7 to v8.x?

One of the more common questions we field at TxMQ comes from the enterprise community. Customers ask: We already upgraded our WebSphere Application Server (WAS) from 6 to 7, why should we now upgrade from 7 to 8? With the amount of chatter surrounding this topic, there’s clearly a bit of disconnect, so here’s some insight to help in the decision-making process.
There are several compelling reasons to upgrade from WAS v7 to v8, and they center on performance and setup/configuration improvements. The performance gains help you maximize your hardware investments because you won’t outgrow your servers as quickly. That ultimately leads to a reduction in your Total Cost of Ownership (TCO).
The setup/configuration improvements will speed up your end-to-end development cycle. You’ll therefore enable better, faster development using the same resources.
Lastly, the mobile-application feature pack offered in WAS v8 is a big advantage for companies already involved with, or wanting to become involved with mobile-app development and operation. This feature pack helps immensely.
That’s the broad-stroke look at what a WAS v8 upgrade delivers. Following is a more granular look at the specific features and benefits of a WAS v8 upgrade, including features exclusive to the latest v8.5 update.
APPLICATION SERVER PERFORMANCE
An upgrade from WAS v7 to v8.x delivers:
• Application performance improvements of up to 20%
• Up to 20% faster server startup time for developers
• Up to 28% faster application deployments in a large topology
• JPA 2.0 optimizations with DynaCache and JPA Level 2 cache
APPLICATION SERVER SETUP
The new Liberty Profile option is a highly composable, fast-to-start and ultra-lightweight profile of the application server and is optimized for developer productivity and web-application deployment.
• Up to 15% faster product installations
• Up to 323% faster application-server creation in a large topology
• Up to 45% faster application-server cluster creation in a large topology
• Up to 11% better vertical scaling on larger multicore systems
JAVA 6
WAS v8 includes JVM runtime enhancements and JIT optimizations. It lowers risks through end-to-end security-hardening enhancements including security updates defined in the Java EE 6 specifications, additional security features enabled by default and improved security-configuration reporting.
SECURITY
Default security behavior is enhanced: SSL communication for RMI/IIOP, protection of HTTP session object contents and cookie protection via the HttpOnly attribute are all enabled.
• Java Servlet 3.0 security now provides three methods – login(), logout() and authenticate() – that can be used with an HttpServletRequest object, plus the ability to declare security constraints via annotations (see the sketch after this list)
• Basic security support for the EJB embeddable container
• Support for Java Authentication SPI for containers (JASPI)
• Web Services Security API (WSS API) and WS-Trust support in JAX-WS to enable users to build single sign-on (SSO) Web services-based applications
• Security enhancement for JAX-RS 1.1
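To make the Servlet 3.0 security additions above concrete, here is a minimal, hedged sketch of a servlet that declares a security constraint via annotation and uses the programmatic login(), logout() and authenticate() methods. The servlet name, URL pattern, role name and request parameters are illustrative assumptions, not taken from this paper; only the javax.servlet API calls come from the Servlet 3.0 specification.
```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.HttpConstraint;
import javax.servlet.annotation.ServletSecurity;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/account")
@ServletSecurity(@HttpConstraint(rolesAllowed = "customer")) // declarative constraint via annotation
public class AccountServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Programmatic login against the container's configured user registry.
        // The parameter names are hypothetical.
        req.login(req.getParameter("user"), req.getParameter("password")); // throws ServletException on failure

        // ... serve the protected resource ...

        req.logout(); // clear the authenticated identity
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // authenticate() triggers the container's configured login mechanism
        // if the caller is not yet authenticated; returns true once authenticated.
        if (req.authenticate(resp)) {
            resp.getWriter().println("Hello, " + req.getRemoteUser());
        }
    }
}
```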
STANDARDS & APIs
The focus on simplification continues in EJB 3.1 with several significant new functions including optional Business Interfaces, Singleton EJBs and Asynchronous Session Bean method invocation.
• CDI 1.0 – New API to support Context and Dependency Injection
• Bean Validation 1.0 – New API for validating POJO Beans
• JSF 2.0 – Adds Facelets as a view technology targeted at JSF
• Java Servlet 3.0 – Makes extensive use of annotations, introduces web fragments and a new asynchronous protocol for long-running requests
• JPA 2.0 – Has improved mapping support to handle embedded collections and ordered lists, and adds the Criteria API (illustrated in the sketch after this list)
• JAX-RS 1.1 – Delivers Web 2.0 programming model support within Java EE
• JAX-WS 2.2 – Extends the functionality provided by the JAX-WS 2.1 specification with new capabilities. The most significant new capability is support for the Web Services Addressing (WS-Addressing) Metadata specification in the API
• JSR-109 1.3 – Adds support for singleton session beans as endpoints, as well as for CDI in JAX-WS handlers and endpoints, and for global, application and module-naming contexts
• JAXB 2.2 – Offers improved performance through marshalling optimizations enabled by default
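As a small illustration of the Criteria API noted in the JPA 2.0 bullet above, the following sketch builds a type-safe query without a JPQL string. The Customer entity and its status field are hypothetical examples, not part of the product or this paper; the javax.persistence.criteria calls are standard JPA 2.0.
```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

// Hypothetical entity used only to demonstrate the API.
@Entity
class Customer {
    @Id Long id;
    String status;
}

public class CriteriaExample {
    // Returns all customers whose status equals "ACTIVE", built as a
    // type-safe Criteria query rather than a JPQL string.
    public static List<Customer> activeCustomers(EntityManager em) {
        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Customer> query = cb.createQuery(Customer.class);
        Root<Customer> c = query.from(Customer.class);               // FROM Customer c
        query.select(c).where(cb.equal(c.get("status"), "ACTIVE"));  // WHERE c.status = 'ACTIVE'
        return em.createQuery(query).getResultList();
    }
}
```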
NEW & ENHANCED FEATURES
The listings here are important and many.
• The Web 2.0 feature pack – new revenue opportunities and rich user experiences enabled by extending enterprise applications to mobile devices
• Faster migrations with less risk of downtime through improved automation and tools, including a no-charge Migration Toolkit for migrating version-to-version and from competition
• Improved developer productivity through new and improved features such as monitored-directory install, uninstall and update of Java EE applications, which accelerates the edit-compile-debug development lifecycle
• Enhanced administrator productivity through automated cloning of nodes within clusters, simpler ability to centrally expand topologies and to locate and edit configuration properties
• Faster problem determination through new binary log and trace framework
• Simpler and faster product install & maintenance with new automated prerequisite and interdependency checking across distributed and z/OS environments
• Deliver better, faster user experiences by aligning programming model strengths with project needs through WebSphere’s leadership in the breadth of programming models supported: Java EE, OSGi Applications, Web 2.0 & Mobile, Java Batch, XML, SCA (Service Component Architecture), CEA (Communications Enabled Apps), SIP (Session Initiation Protocol) & Dynamic Scripting
• Integration of WebSphere Application Server v7 feature packs to simplify access to new programming models
• Deliver single sign-on (SSO) applications faster through new and improved support for SAML, WS Trust and WSS API specifications
• Most v7 feature packs are integrated into v8: OSGi Applications, Service Component Architecture (SCA), a Java Batch container, the Communications Enabled Applications (CEA) programming model, and XML programming-model improvements (XSLT 2.0, XPath 2.0 and XQuery 1.0)
• Automatic Node Recovery and Restart
ENHANCEMENTS IN v8.5
The following enhancements are specific to WAS v8.5.
• Application Edition Management enables interruption-free application rollout. Applications can be upgraded without incurring outages to your end-users
• Health Management monitors the status of your application servers and is able to sense and respond to problem areas before end-users suffer an outage. Problem areas include increased time spent on garbage collection, excessive request timeouts, excessive response time, excessive memory and much more
• Intelligent Routing improves business results by ensuring priority is given to business-critical applications. Requests are prioritized and routed based upon administrator-defined rules
• Dynamic Clustering can dynamically provision and start or stop new instances of application server Java Virtual Machines (JVM) based on workload demands. It provides the ability to meet Service Level Agreements when multiple applications compete for resources
• Enterprise Batch Workload support leverages your existing Java online transaction processing (OLTP) infrastructure to support new Java batch workloads. Java batch applications can be executed across multiple Java Enterprise Edition (Java EE) environments
• IBM WebSphere SDK Java Technology Edition v7.0 as an optional pluggable Java Development Kit (JDK)
• Web 2.0 and Mobile Toolkit provides an evolution of the previous feature pack
• Updated WebSphere Tools bundles provide the “right-fit” tools environment to meet varied development and application needs
Are you evaluating a WAS upgrade? TxMQ can help. To get started, contact vice president Miles Roty at (716) 636-0070 x228, miles@txmq.com.

Discussion: The Four Categories Of Technology Decisions and Strategy

Authored by Chuck Fried, President – TxMQ, Inc.

Project Description

There are countless challenges facing IT decision-makers today. Loosely, we can break them down into four categories: Cost control, revenue generation, security and compliance, and new initiatives.
There’s obvious overlap between these categories: Can’t cost control be closely related to security considerations when evaluating how much to invest in the latter? Aren’t new initiatives evaluated based on their ability to drive revenues, or alternatively to control costs? Yet these categories will at least allow us to focus and organize the discussion below.

Cost Control

Cost control has long been a leading driver in IT-spend decisions. From the early days of IT in the ’50s and ’60s, technology typically had a corporate champion who argued that a failure to invest in tomorrow would lead to customer attrition and market-share erosion. A loss of competitive advantage and other fear-mongering were common arguments used to steer leadership toward investments in something the leadership couldn’t fully grasp. Think about the recent arc on AMC TV’s Mad Men featuring an early investment in an IBM mainframe in a 1960s ad agency – championed by few, understood by even fewer.

From these early days, IT often became a black box in which leadership saw a growing line of cost, with little understanding attached to what was happening inside that box, and no map of how to control those growing expenditures.

Eventually, these costs became baked-in. Companies had to contend with the reality that IT investment was a necessary, if little-understood reality. Baseline budgets were set, and each new request from IT needed to bring with it a strong ROI model and cost justification.

In more recent years – and in a somewhat misdirected attempt to control these costs – we witnessed the new option to outsource and offshore technology. It was all an attempt to reduce baked-in IT costs by hiring cheaper labor (offshoring), or wrapping the IT costs into line-item work efforts managed by third parties (outsourcing). Both these efforts continue to meet with somewhat mixed reviews. The jury, in short, remains out.

IT costs come from five primary sources – hardware, software, support (maintenance and renewals), consulting and people. Note that a new line-item might be called “cloud,” but let’s keep that rolled into hardware costs for now. Whether cloud options change the investment equation or not isn’t the point of this paper. For now, we’ll treat them as just another way to acquire technology. There’s little, though growing, evidence that cloud solutions work as a cost reducer. Instead, they more often alter the cash-flow equation.

Hardware costs are the most readily understood and encompass the servers, desktops, printers – the physical assets that run the technology.

Software, also relatively well understood, represents the systems and applications that the company has purchased (or invested in if homegrown) that run the business, and run on the hardware.
Support refers to the dollars charged by software and/or hardware companies to continue to run licensed software and systems.

Consulting means dollars invested in outside companies to assist in both the running of the systems, as well as in strategic guidance around technology.

People, of course, refers to the staff tasked with running and maintaining the company’s technology. And it’s the people costs that are perhaps the most sensitive. While a company might want to downsize, no one wants the publicity of massive layoffs to besmirch a brand or name. Yet a reality of a well-built, well-designed and well-managed IT infrastructure should be a reduction in the headcount required to run those systems. If a company doesn’t see this trend in place, consider it a red flag worthy of study.

How should a company control IT costs? Unless a company is starting from scratch, there’s going to be some level of fixed infrastructure in place, as well as a defined skills map of in-place personnel. A company with a heavy reliance on Microsoft systems and technology, with a well-staffed IT department of Microsoft-centric skills, should think long and hard before bringing in systems that require a lot of Oracle or open-source capabilities.

Companies must understand their current realities in order to make decisions about technology investment, as well as cost control. If a company relies on homegrown applications, it can’t readily fire all of its developers. If a company relies mostly on purchased applications, it might be able to run a leaner in-house development group.

Similarly, companies can’t operate in a vacuum, nor can IT departments. Employees must be encouraged, and challenged, to attain certifications, attend conferences and network with peers outside of the company. Someone, somewhere already solved the problem you’re facing today. Don’t reinvent the wheel. Education is NOT an area we’d encourage companies to cut back on.

At the same time, consulting with the right partner can produce dramatic, measurable results. In the 21st century, it’s rather easy to vet consulting companies – to pre-determine capability and suitability of fit both skills-wise, as well as culturally. The right consulting partner can solve problems more quickly than in-house teams, and also present an objective outsider’s view of what you’re doing right and what, perhaps, you could do better. “We’ve always done it this way” isn’t an appropriate reason to do anything. A consulting company can ask questions oftentimes deemed too sensitive even for leadership. I’ve been brought into several consulting situations where leadership already knew what decision would be made, but needed an outside party to make it and present the option so things would appear less politically motivated.

Similarly, consulting companies can offer opinions on best practices and best-product options when required. A software company, on the other hand, won’t recommend a competitor’s product. A consulting company that represents many software lines, or even none at all, can make a far more objective recommendation.

In point of fact: One of the primary roles for an experienced consulting company is to be the outside partner that makes the painful, outspoken recommendations leadership knows are necessary, but can’t effectively broadcast or argue for within the boardroom.

Innovation As Cost Control

There are several areas of technology spend with legitimately demonstrated ROI. Among these are BPM, SOA, process improvement (through various methodologies and approaches from six sigma to Agile development among many others), as well as some cloud offerings previously touched on.
BPM, or business-process management, broadly describes a category of software whereby a company’s processes, or even lines of business, can be automated to reduce complexity, introduce uniformity and consistency of delivery, and yes, reduce the headcount needed to support the process.

We’re not talking about pure automation. Loosely, BPM refers to a process that cannot be fully automated: much of it can be automated, yet it still requires some human interaction or involvement. Think loan application or insurance claim. Many of the steps can be automated, yet a person still has to be involved. At least for now, a person must appraise a damaged car or give final loan approval.
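
Here is a deliberately simple sketch (plain Java, not any particular BPM product’s API) of that idea: an insurance-claim process in which the routine checks run automatically, but the appraisal step waits for a human decision. The states and method names are hypothetical.
```java
public class ClaimProcess {

    enum State { SUBMITTED, VALIDATED, AWAITING_ADJUSTER, APPROVED, REJECTED }

    private State state = State.SUBMITTED;

    // Automated portion: runs with no human involvement
    // (e.g., policy is active, claim form is complete, no duplicate claim).
    public void runAutomatedSteps() {
        state = State.VALIDATED;
        // The process now parks until an adjuster appraises the damaged car.
        state = State.AWAITING_ADJUSTER;
    }

    // Human task: an adjuster records the appraisal decision.
    public void recordAdjusterDecision(boolean approved) {
        if (state != State.AWAITING_ADJUSTER) {
            throw new IllegalStateException("No human task is pending");
        }
        state = approved ? State.APPROVED : State.REJECTED;
    }

    public State currentState() {
        return state;
    }
}
```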

SOA, discussed elsewhere in this paper, means taking advantage of a loosely coupled infrastructure to allow a more nimble response to business realities, and greater cost control around application development and spend. Process improvement, of course, means becoming leaner, and better aligning business process with business realities, as well as ensuring greater consistency of delivery and less stress on internal staff due to poor process control.

Big Data As Cost Control

Big data, discussed later in this paper, can also be used, or some might say misused, to drive down costs. An insurance company can use data analysis to determine who might be too costly to be covered any longer. A bank might also feel certain neighborhoods or even cities are too risky to offer loans in. There’s a dark side to big data. At times even unintended consequences may result. It’s important for human oversight to remain closely aligned to system-generated decisions in these nascent days of big data. While Google may have figured out how to automate the driving process, companies today are still hopefully more than a few years away from 100% automated decisioning. One hopes society, as much as government oversight, will help ensure this remains the case for the foreseeable future.

Integration And Service Oriented Architecture As Cost Control

Too often companies rely on disparate systems with limited, if any, ability to interact. How often have you been logged on to your bank’s online system, only to have to log on a second time to access other accounts at the same institution?

Companies must be quick to recognize that consumers today expect seamless, complete integration at all points of their interaction. Similarly, suppliers and trading partners are ever more expectant of smooth onboarding and ease-of-business transacting. Faxing purchase orders is yesterday. Real-time tracking and reordering of dwindling products in the supply chain is the new normal.

At the same time, companies must recognize that to be nimble and adaptable against the ever-changing reality of their businesses, applications must be designed and implemented differently. It’s one thing if a company has five different systems and no more. A simple 1980s-era point-to-point architecture might suffice. Yet other than that limited example, enterprises show an ever-changing array of systems and needs. Hardcoding applications to each other yields an inflexible and challenging infrastructure that’s cost-prohibitive to change.

An Enterprise Service Bus, or ESB, is what most enterprises and midmarket companies today recognize as a model architecture. Decoupling applications and integrating them loosely and asynchronously enables rapid application design and more nimble business decision-making. At the same time, an SOA-enabled environment also allows for the rapid adoption of new products and services, as well as the rapid rollout of new offerings to customers and trading partners. Yes, this does require a changing of attitudes at the development, QA and production-support levels, but it’s a small price to pay for long-term corporate success.
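
As a rough illustration of that loose coupling, the sketch below sends an order message to a queue instead of calling the downstream application directly; the consumer picks the message up whenever it is ready. The JNDI name, queue name and payload are assumptions for the example, and in a WebSphere shop the connection factory would typically point at a WebSphere MQ queue manager; the javax.jms calls themselves are standard JMS.
```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class OrderPublisher {

    // Fire-and-forget publication: the sending application never blocks
    // waiting on the consuming application.
    public static void publishOrder(String orderXml) throws Exception {
        InitialContext ctx = new InitialContext();
        // Hypothetical JNDI name; typically backed by an MQ queue manager.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/OrderCF");
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("ORDERS.IN"); // hypothetical queue name
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(orderXml));
        } finally {
            conn.close();
        }
    }
}
```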

Revenue Generation

The idea of IT helping companies generate new revenue streams isn’t new. It’s as old as the technology itself. Technology has always played a role in driving revenue – from airline-reservation systems, to point-of-sale systems, to automated teller machines (allowing banks to charge fees). Yet these systems didn’t create new revenues (with the possible exception of the ATM example) – they simply automated previously existing systems and processes.

Airlines were booking flights before the Internet made this possible for the mass public, and restaurants have, of course, been serving people dating back to the dawn of civilization. Technology improved these processes – it didn’t create them.

But until recently there were no app stores and certainly no online videogame communities. Not to mention the creation of entire companies whose revenue streams weren’t even theoretically possible years ago – think Netflix or Square or Uber. So new revenue streams are as likely to be the sole source of revenue for a company as they are to supplement an already in-place business model.
Perhaps one of the more exciting new-revenue developments has come to be called the API Economy. An API, or Application Programming Interface, is a piece of code, or a set of rules, that defines how a software component should interact with other software or components. As an example, Salesforce.com has a published API to define how another application can integrate with it.

This new ecosystem is based on what grew out of web-browser cookies years ago. Cookies are bits of data on individuals’ personal systems used to identify returning visitors to websites. Today, the API economy describes how and why our browsers look the way they do, why we see which ads where, and much more. A visit today to most websites involves that site looking at your profile and comparing it to data that particular company has on you from other APIs it might own or subscribe to. Facebook, Google, Amazon and others allow companies to target-market to the visitor based on preferences, other purchases and the like. Similarly, one can opt in to some of these capabilities to allow far greater personalization of experiences. A user might opt in to a feed from Starbucks, as well as a mall he or she frequents, so upon entry to the mall, Starbucks sends a coupon if the guest doesn’t come in the store on that particular visit.

Big Data As Revenue Generator

Taking the API example from above a step further, companies can combine data from multiple sources to run even more targeted research or marketing campaigns. Simply put, Big Data today leverages incredible computing power and the ever-increasing amount of publicly (and, when individuals allow it, privately) available data to drive certain outcomes, or to uncover previously unknown trends and information.

There are a variety of dramatic examples recently cited in this space. Among them, a realization that by combining information on insurance claims, water usage, vacancy rates, ownership and more, some municipalities have been able to predict what properties are more likely than others to experience failure due to theft or fire. What makes this possible is technology’s nearly unimaginable ability to crunch inconceivably large data sets that no longer have to live, or even be moved, to a common system or platform.

In the past, to study data one had to build a repository or data warehouse to store the data to be studied. One had to use tools (typically ETL) and convoluted twists and orchestrations of steps to get data into one place to be studied. At the same time, one had to look at subsets of data. One couldn’t study ALL the data – it just wasn’t an option. Now it is.

In another dramatic and oft-cited example, it was discovered that by looking at the pattern of Google searches on “cold and flu remedies,” Google was better able than even the CDC to predict the pattern of influenza outbreaks. Google had access to ALL the data. The data set equaled ALL. The CDC only had reports, or a subset of the data to look at. The larger the data set, oftentimes the more unexpected the result. It’s counter-intuitive, but true.

How and where revenue models work within this space is evolving, but anyone who’s noticed more and more digital and mobile ads that are more and more appropriate to their likes (public or private) is experiencing Big Data at work, combined with the API Economy.

Mobile As A Revenue Generator

Mobile First. It’s a mantra we hear today that simply means companies must realize that their customers and employees are increasingly likely to access the company’s systems using a mobile platform first, ahead of a desktop or laptop device. It used to be an afterthought, but today a company must think first about deploying its applications to support the mobile-first reality. But how can mobile become a unique revenue driver?

Outside of apps in the Android and Apple App stores, companies struggle with use cases for mobile platforms. This author has seen the implementation of many creative use cases, and there are countless others yet to be imagined. Many retailers have followed Apple’s model of moving away from fixed checkout stations to enable more of their employees to check people out from a mobile device anywhere in the store. Similarly, many retailers have moved to tablets to enable their floor employees to remain engaged with consumers and to show the customers some catalog items perhaps not on display. The mobile platform can even be used to communicate with a backroom clerk to bring out a shoe to try on while remaining with the guest. It’s a great way to reduce the risk of an early departure. A strong search application might also allow the retailer to show the customer what other guests who looked at the item in question also looked at (think Amazon’s famous “others who bought this also bought these” feature). We’ll discuss mobile further later in this paper.

Security and Compliance

Not enough can be said about security. All vendors have introduced products and solutions with robust new enhancements in security – from perimeter hardening to intrusion detection. Yet security begins at home. Companies must begin with a complete assessment of their unique needs, capabilities, strengths and weaknesses. There’s no such thing as too much security. A bank might know it can require fingerprint verification, photo id and more at the point of sale prior to opening a new account. Yet that same bank might also acknowledge that too high a barrier might turn off the very customers it needs to attract. It’s a delicate tightrope.

All experts agree that the primary security threats are internal. More theft happens from within an organization than from without. Proper procedures and controls are critical steps all enterprises must take.

The following areas can be loosely discussed under the security-and-compliance umbrella. They may not all fit tightly here, but it’s the most useful grouping for our purposes.

Mobile Device Diversity And Management

Mobile’s everywhere. Who among us doesn’t have at least one, if not several, mobile devices within arm’s reach at all hours of the day? It’s often the first thing we check when we rise in the morning and the last thing we touch as we lay down at night. How can companies today hope to manage the desires of their employees to bring their own devices to work and plug into internal systems? Then there’s the concern about choosing what platforms can and should be supported to allow for required customer and trading-partner interaction, not to mention securing and managing these devices.

A thorough study of this topic would require volumes and far greater study than is intended here. Yet a comprehensive device strategy is requisite today for any midmarket to enterprise company, and a failure to recognize that fact will lead only to disaster. A laptop left in a cab or a cell phone forgotten in a restaurant can lead to a very costly data breach if the device isn’t both locked down, as well as enabled with a remote-management capability to allow immediate data erasure.
As to enabling customer and partner access: As stated earlier, companies are made up of people, and people are more likely to access a system today from a mobile device than from a desktop. It’s crucial that companies engineer for this reality. A robust mobile application isn’t enough. Customers demand a seamless and completely integrated frontend to backend system for mobile to mirror a desktop-quality experience.

Cloud

So what is a cloud, and why is everyone talking about it? As most realize now, cloud is a loose term meant to refer to any application or system NOT hosted, run or managed in the more traditional on-premises way. In the past, a mainframe ran an application accessed by dedicated terminals. This evolved to smaller (yet still large) systems also with dedicated “slaved” terminals, and later to client-server architecture where the client could run its own single-user systems (think MS Office Suite) as well as access back-end applications.

Cloud applications are most easily thought of as entire systems that are simply run elsewhere. Apple’s App Store, Salesforce.com, online banking and many examples too numerous to mention are typical. Companies today can also choose to have their traditional systems hosted and run, and/or managed in the cloud, but can also architect on-premises clouds, or even hybrid on- and off-premises clouds.

The advantages are numerous, but the pitfalls are too. Leveraging someone else’s expertise to manage systems can produce great returns. Yet companies often fail to properly negotiate quality-of-service terms and SLAs appropriate to their unique needs, and complain of poor response time, occasional outages and the like. We can’t even begin to address the security implications of cloud, including what customer or patient data can, and cannot, be stored off premises.
There are countless consulting firms that can guide companies needing help to navigate the messy landscape of cloud options and providers.

New Initiatives:

Internet Of Everything (IOT)

A web-enabled refrigerator? A thermostat that can automatically set itself? A garage door that can send an email indicating it was left open? GPS in cars, watches and phones? These are just a few of the ideas we once thought silly, or simply couldn’t conceive of yet. But they’re a reality today. Right now we’re like the 1950s science-fiction filmmaker: We’re unable to foresee where this will all end up, and what we can comprehend will probably seem silly and simple a decade from now.

CIOs and IT decision-makers today are faced with a never-ending list of things to worry about and think about. Today the IOT is a complex maze of what-ifs where reality seems to grow murkier, not clearer, at times. What is clear, though, is that just because something can be done doesn’t mean it should be done. Many of these interesting innovations will lead to evolutionary dead ends as surely as the world moved away from VCRs and Walkman cassette players. Tomorrow’s technology will get here, and the path will likely be as interesting as the one we’ve all followed to make it this far.

CIOs must continue to educate and re-educate themselves on technology and consumer trends. This author found himself at a technology conference recently with executives from several global technology companies. Also in attendance was the author’s 17-year-old son, who quickly found himself surrounded by IT leaders asking his opinion on everything from mobile-device preferences to purchasing decisions in malls. When the dust settled, the teenager turned to the author and said rather decisively: “They all have it mostly wrong.”

Cloud, discussed earlier, as well as new mobile offerings can also be fit under this category of discussion.

Personnel And Staffing

More has likely been written on the topic of personnel and staffing in recent years than on all the above topics combined, and thus we left it as a unique item for final thought. Companies struggle with identifying talent to fill all of their vacant positions. At the same time, the populace complains that there aren’t enough jobs. Where’s the disconnect?
The United States, and most democratized nations, are at a societal inflection point. As we move beyond the industrial revolution into this new Internet or Information Age, there will be pain.
Just as the United States struggled to adjust to the move from an agrarian society to an industrial one, we now struggle yet again.

We must be careful to not make pendulum-like moves without a careful study of the consequences – both intended and otherwise. It’s very difficult to change the momentum of a pendulum, and some policy decisions are difficult to undo.

There is an income gap. No economist would argue against that point. Wealth continues to accumulate at the top and leave a growing gap between the haves and have-nots. Some of this is expected and will settle out over time, yet most requires broad-based policy decisions on the part of lawmakers and corporate leaders alike.

Education reform and immigration reform are the tip of the iceberg we must begin to tackle. Yet we must also tackle corporate resistance to a more mobile workforce. Too often this author hears of a critical technology position that remains vacant because the employer refuses to hire a remote worker who could very readily do the job, perhaps at a lower cost given the worker’s willingness to work for less from home.

While technology leaders like IBM, Oracle, HP, Microsoft and others have quickly moved to adopt this paradigm, corporate America has moved rather more slowly.
Wherever we arrive in the future, it will be the result of decisions made today by leaders facing a rather unique set of challenges. Yet never before have we had access to the amount of information we have today to make those decisions.

With careful thought, and proper insight, the future looks rather exciting.

Case Study: Medical Mutual Reduces Fees By $500K Through Real-Time Processing, Security

by Natalie Miller | @natalieatWIS for IBM Insight Magazine

Project Description

Ohio healthcare payer Medical Mutual wanted to take on more trading partners and more easily align with government protocols, but lacked the robust, secure infrastructure needed to support the company’s operations. “We needed to set up trading partner software and a B2B infrastructure so we could move the data inside and outside the company,” says Eleanor Danser, EDI Manager, Medical Mutual of Ohio. “The parts that we were missing were the trading partner software and the communications piece to support all the real-time protocols that are required from the ACA, which is the Affordable Care Act.”
Medical Mutual already had IBM WebSphere MQ and IBM WebSphere Message Broker, as well as IBM WebSphere Transformation Extender (TX), in its arsenal to move the company’s hundreds of daily file transfer protocol (FTP) transactions. Healthcare payers are constantly moving data and setting up connections between different industry sectors—efforts that involve securing information from providers and employers who then send out to clearinghouses and providers.
“It’s constantly moving data back and forth between different entities—from claims data, membership data, eligibility and benefit information, claims status—all the transactions that the healthcare industry uses today,” says Danser.
However, as the healthcare industry evolves, so does its need for streamlined and easy communication. Medical Mutual also realized that its current infrastructure didn’t provide the necessary authentication and security. It needed a Partner Gateway solution with batch and real-time processing that could meet or beat the 20-second response window required to stay HIPAA compliant.
Medical Mutual sought a solution to aid with the communications piece of the transaction, or the “handshake of the data,” explains Danser. “You must build thorough and robust security and protocols [around] the authentication of a trading partner to be able to sign in and drop data off to our systems, or for us to be able to drop data off to their systems … It’s the authentication and security of the process that must take place in order to move the data.”
Without the proper in-house expertise for such a project, Medical Mutual called upon TxMQ, an IBM Premier business partner and provider of systems integration, implementation, consultation and training.

Choosing a solution and assembling a team

Since Medical Mutual already had an existing infrastructure in place using IBM software, choosing an IBM solution for the missing trading partner software and the communication piece was a practical decision.
“We went out and looked at various vendor options,” explains Danser. “If we went outside of IBM we would have had to change certain parts of our infrastructure, which we really didn’t want to do. So this solution allowed us to use our existing infrastructure and simply build around it and enhance it. It was very cost effective to do that.”
In December 2012, Danser and her team received approval to move forward with IBM WebSphere DataPower B2B Appliance XB62 — a solution widely used in the healthcare industry with the built-in trading partner setup and configurations Medical Mutual wanted to implement.

TxMQ’s experience and connections set Medical Mutual up for success

The project kicked off in early 2013 with the help of four experts from TxMQ. The TxMQ team of four worked alongside Danser’s team of four full-time staff members from project start through the September 2013 launch of the system.
“[TxMQ] possessed the expertise we needed to support what we were trying to do,” says Danser of the TxMQ team, which consisted of an IBM WebSphere DataPower project manager, an IBM WebSphere DataPower expert, an IBM WebSphere TX translator expert, and an IBM WebSphere Message Broker expert. “They helped us with the design of the infrastructure and the layout of the project.”
The design process wrapped up in April 2013, after which implementation began. According to Danser, the TxMQ project manager was on-site in the Ohio office once a week for the first few months. The Message Broker expert was on-site for almost four months. Some of the experts, such as the IBM WebSphere DataPower expert, held weekly meetings from an offsite location.

Overcoming Implementation Challenges

TxMQ stayed on until the project went live in September 2013 — two-and-a-half months past Danser’s original delivery-date estimate. The biggest challenge that contributed to the delay was Medical Mutual’s limited experience with the technology, which required cross-training.
“We didn’t have any expertise in-house,” explains Danser, adding that the IBM WebSphere DataPower systems and the MQFTE were the steepest parts of the learning curve. “We relied a lot on the consultants to fill that gap for us until we were up to speed. We did bring in some of the MQ training from outside, but primarily it was learning on the job, so that slowed us down quite a bit. We knew how our old infrastructure worked and this was completely different.”
Another issue that contributed to the delay was the need to search out and identify system-platform ownership. “Laying out ownership of the pieces … took a while, given the resources and time required,” explains Danser. “It involved trying to lay out how the new infrastructure should work and then putting the processes we had in place into that new infrastructure. We knew what we wanted it to do—it was figuring out how to do that.”

“We also wanted to make sure that the solution would support us for years to come, not just a year or two. By the time we were done, we were pretty confident with the decision that we made. Overall we feel the solution was appropriate for Medical Mutual.”

– Eleanor Danser, EDI Manager, Medical Mutual of Ohio

And because Danser’s team wanted the system to work the same way as the existing infrastructure, heavy customization was also needed. “There was a lot of homegrown code that went into the process,” she adds.

Project realizes cost savings, increased efficiency

Since the implementation, Medical Mutual reports real cost savings and increased efficiency. As was the goal from the beginning, the company can now more easily take on trading partners. According to Danser, the use of IBM WebSphere DataPower creates an infrastructure that greatly improves the time needed to set up those trading-partner connections, including a recent connection with the Federal Exchange. Medical Mutual is now able to shorten testing with trading partners and move data more quickly.
“Before, it would take weeks to [take on a new partner], and now we are down to days,” says Danser.
“We’re not restricted to just the EDI transactions anymore,” she continues, explaining that Medical Mutual’s infrastructure is now not only more robust, but more flexible. “We can use XML [Management Interface] and tools like that to move data also.”
IBM WebSphere DataPower additionally moved Medical Mutual from batch processing into a real-time environment. The new system gives trading partners the ability to manage their own transactions and automates the process into a browser-based view for them, so onboarding new partners is now a faster, more scalable process.
Additionally, Medical Mutual has been able to significantly reduce transaction fees for claims data by going direct with clearinghouses or other providers. According to Danser, Medical Mutual expects an annual savings of $250,000 to $500,000 in transactional fees.

White Paper: Four-Quadrant Analysis

Prepared By: Cindy Gregoire, TxMQ Practice Manager, Middleware & Application Integration Services

Abstract

There is a principle in developing solutions that bring business value for multiple business units called four-quadrant analysis.

Queries
  • Are you having difficulty getting your SOA off the ground?
  • Are business initiatives dragging down your infrastructure with lots of low-quality web services that you wouldn’t even consider for reuse?
  • Are you able to realize your SOA investments with rapid development, high business value and high acclaim from your business counterparts?

Using Service Oriented Architecture

It may be time to consider changing your requirements gathering process.
There is a principle in developing solutions that bring business value for multiple business units called four-quadrant analysis. The analysis involves interviewing stakeholders and collecting the gathered information into four distinct categories, or quadrants, that together form a more complete architecture framework for completing business-process-management initiatives – leveraging your middleware services and application provisioning within the framework of a service-oriented architecture.
Within a service-oriented architecture (SOA), applications are developed and deployed as assemblies of reusable components that are modified for a new purpose. The goal is to minimize development effort, thereby minimizing the extensive need for testing and resulting in rapid delivery of business solutions. Delivery of business automation involves the implementation of software to perform work upon business data. In most cases, it also involves the routing of decisions, inputs or outputs to various user groups who operate independently from one another, and may involve service providers or external business entities.

SOA Requirements

In implementing a SOA, requirements may originate from a number of sources, making the job of business analysis even more difficult as analysts attempt to identify and define what is needed to realize the SOA investment in flexibility, reuse and speed-to-market.
Functional requirements obtained through agile development efforts tend to have a myopic focus on the user interface, since requirements become known during an iterative process between application users and “agile” developers. This leaves BAs drowning in a mire of minute detail around the many options of where, how and when to display fields, fonts and typestyle themes.
Requirements defined through the agile methodology tend to be end-user focused, with “look and feel” taking higher priority than code efficiency, system performance or keystroke/user efficiency. Those concerns are labeled “non-functional” and generally become background information as projects are closed out based only on meeting requirements derived from the application’s web graphical user interface or from one of the many portal technologies.
Portal technologies are patterned for language and usage patterns that can be modified quickly, providing a unique look and feel for each user group. Department portals may be modified without development code and optimized for the user groups accessing them; however, storage of business data and critical information can cause problems across the enterprise when decisions are made on an application-by-application basis (as they often were in pre-SOA days).
The issue arises as data, process and procedures come to vary considerably by business process or department. Information that needs to be supplemented, indexed or compared against metrics for proper handling or escalation surfaces all over the organization in portals, applications, Excel spreadsheets, Access databases and the like, and the resulting analysis data is not maintained or shared beyond an individual or department-level source – it may not even be known outside the department or business function.
As a result, simple changes to the organizational structure can result in major loss of critical data (wiped off an end user’s desktop), immediate retirement of business assets (through non-use), or the requirement for entire groups to “re-tool” around only those applications or interfaces used by priority groups (subjecting the company to further inefficiencies and delays).

Leveraging the Flexibility of SOA

In an environment such as the above, how do you analyze the functional and non-functional requirements required to effectively leverage the flexibility and capabilities of the emerging SOA technologies without causing continual chaos in your service delivery?
How do you recognize applications which should be designed as common services across the enterprise? How do you manage the proliferation of applications that all handle similar – duplicate – data, and that end up requiring you to host large farms of application servers running undocumented applications with unknown owners, when you are not sure what the applications are, who is using them, or what they are being used for?

The Y2X Flowdown

Enter four-quadrant analysis: a new perspective on information gathering and requirements modeling, leveraging Six Sigma, that addresses issues introduced by the SOA space. The technique involves, first and foremost, interviewing key stakeholders of your SOA initiatives to prioritize the key objectives for the company’s SOA framework using the Y2X FlowDown tool.
The Y2X FlowDown is a Six Sigma-developed technique that organizes the project deliverables, identifies dependencies, and creates a visual diagram from stakeholder input for the identification of project stages and measurable objectives that will be realized over the completion of one or more projects. During the Y2X FlowDown meeting, it may become apparent that some of the project objectives will not be realized during initial phases of a SOA. This is an important step during project initiation to ensure expectations are being managed realistically and in the appropriate context of cost management.
As a result of the Y2X FlowDown analysis, it may be found that additional software or major investments may be required to “measure” the successes of the project. Additionally, it may become clear through the flow-down discussion that not all needed stakeholders have been identified or involved in the discussions. The value of the Y2X FlowDown process is twofold: (1) getting stakeholders on the same page with project-outcome expectations, and (2) the Y2X FlowDown diagram itself, an example of which is shown below in Figure 1, which becomes a reference point returned to again and again during each project, and at the beginning and end of project phases, as a “roadmap” of expectations for the project.

The Y2X FlowDown process unites the SOA sponsors with the infrastructure and development teams in an outcome-focused planning effort which is not distracted by the details of the project delivery. It simply answers the questions: What will be delivered? How will success be measured? When can I expect delivery?
Once a Y2X FlowDown diagram has been created for each initiative going forward, the specific objectives, and how they will be measured, become a regular discussion point, with the diagram serving as a key project-management tool among the various groups that will be impacted by the new SOA, the development approach, and the process for rolling out business services.
From this point forward, your service delivery should be governed by a process which prioritizes and leverages the SOA components and patterns that are performing well within the SOA. This is key for manageability and for realizing the promises of SOA.

The Role of Business Analysis

While business initiatives and infrastructure projects are being identified and prioritized, SOA business analysts are re-assessing critical business processes for process improvement opportunities. If they are not, the focus may be on the wrong things, a mistake many companies make.
They either hire BAs who are not technical enough to understand what they are mapping requirements to, or BAs who are too myopic and unable to see how services can leverage other services to become composite services. These analysts focus on end-user requirements while ignoring critical details such as throughput, error handling, and the escalation and routing of exceptions, the very frameworks that become the basis for optimizing a Business Process Management tool.
The role of the business analyst is to quantify the “as is” and the “to be,” and to prioritize the value of filling the “gaps.” Outputs may consist of activity diagrams, SIPOC diagrams, requirements models, and data flow diagrams. Many companies use “swim lane” diagrams to portray the business process; however, these diagrams become difficult to review and can even become a source of confusion when the business flow changes or a new packaged application is purchased.
If the start and stop points, inputs, and outputs of each business process are not easily identified, process improvements are impossible to identify, and the business process ends up being mapped to how the “new” application works, creating yet another source of information overload for business users when the next business service is rolled out. This common approach creates still more sources of duplicate data across the enterprise, with multiple groups performing similar tasks, and can lead to badly out-of-sync data and data management nightmares.
It is at this juncture that many companies begin to take SOA governance seriously as a business service development approach. Resolving this kind of confusion belongs to the BA role: problems created by duplicate data, duplicate functionality within applications, and business functions performed in multiple places across the organization are all problems a BA must solve.

Four Quadrant Analysis: Business Analysis

The tool of the Technical Business Analyst (TBA) is the collective assessment and evaluation of technical requirements across these four quadrants. The TBA role is now commonly filled by what is more recently known as the Enterprise Architect, as enterprise-level tools, data, and services are defined for the sake of leveraging a common toolkit across the enterprise. Such efforts often result in the introduction of ERP technologies such as SAP, or in outsourcing functions such as payroll and accounting. The TBA approaches the organization holistically by building a visual that helps the organization understand its use of technology, grounded in the understanding that businesses run effectively because people follow procedures and access tools that use corporate data, whether that staff is internal, partner, or outsourced.
The four quadrants in this model are People, Procedures, Tools, and Data. One further element bridges the points where different groups use the same data; those points are “bridged” by what we call Architecture, which, visually, can take the form shown in figure 2.
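As a complement to figure 2, the sketch below shows one hypothetical way a TBA might record a department’s assessment across the four quadrants, with “bridges” capturing the points where two groups touch the same data. The types and field names are assumptions made purely for illustration; the Four Quadrant method itself prescribes no particular schema.

```java
import java.util.List;

// Hypothetical data model for recording a Four Quadrant assessment.
// Field names are illustrative only.
public class FourQuadrantModel {

    // One department's assessment across the four quadrants.
    record DepartmentAssessment(
            String department,
            List<String> people,      // roles and staff (internal, partner, outsourced)
            List<String> procedures,  // business processes the department performs
            List<String> tools,       // applications and services used
            List<String> data) {}     // corporate data the department reads or maintains

    // An "architecture" bridge: two departments that depend on the same data.
    record Bridge(String departmentA, String departmentB, String sharedData) {}
}
```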

For each department in the company, the TBA assesses these four elements, starting with the critical business process activity diagrams. The TBA quickly learns which departments require what information, using which tools (applications and services), to complete which procedures (business processing). This approach prevents “compartmentalization,” in which focus shifts to a specific need (the squeaky wheel gets the grease) rather than to that need’s relevance to particular business functions, each of which has quantifiable value within the overall business process.
This fundamental visual of the organization allows the “connectors” between departments and between roles to be identified quickly, along with escalations within a business process, critical dependencies, the need for SLAs and time-critical processing, and the need for data maintenance standards to control and manage enterprise data. The “bridge” between staff and tools is the “keystone,” or what we in IT commonly call architecture. This distinct approach leads to identifying existing assets and modifying them for multi-use functions, preserving the integrity of data, decreasing the need for data maintenance, and keeping in place the controls that regulate efficiency. When all four quadrants are addressed methodically as part of business requirements analysis, there are few “dropped balls” or missing requirements.
Where connectors are identified, the need for architecture to bridge between quadrant elements, the mapping of capabilities to applications and services, and the most effective use of the organization’s cornerstone technologies all become key inputs to the enterprise architecture that enables rapid service deployment across business units.
This approach also speeds the creation of a critical inventory of enterprise assets, which can be reviewed to optimize vendor relationships, drive consolidation, and realize substantial cost savings as you streamline assets and redesign for broader business optimization.


If you are interested in revitalizing your business requirements gathering process – Contact Us

Webinar: Middleware Monitoring & Management

This informative webinar, presented by TxMQ, offers both a macro and a micro look into the middleware space, with the goal of better informing participants about how to monitor their middleware applications and manage their capacity planning.

  • How day-to-day monitoring mindset affects tools and implementation
  • Options for base-level and application-level monitoring
  • IHS / Apache
  • DB2 / Oracle / SQL Server
  • Server / VM / LPAR
  • Day-to-day monitoring for availability and performance
  • Growing and managing your infrastructure
  • Capacity planning for growth
  • Source and summary data
  • Tool concerns
  • Best practices

Webinar: The Value of Third-Party System HealthChecks

Have you considered the value of inviting a third party to review your middleware infrastructure? TxMQ’s HealthCheck service provides a technical evaluation of your middleware environment that includes measures of:

  • Performance
  • Scalability
  • Configuration
  • Security

TxMQ also evaluates your implementation from a best-practices standpoint. Use this evaluation to compare your company to current standards, as well as to your peers and competitors.

Case Study: Middleware Health Check

Project Description

An online invoicing and payment management company (client) requested a WebSphere Systems Health Check to ensure their systems were running optimally and prepared for an expected increase in volume. The project scope called for onsite current-state data collection and analysis, followed by offsite detailed data analysis and white paper preparation and delivery. Next, our consultant performed the recommended changes (WebSphere patches and version upgrades).

Click here to learn more about our Middleware Health Check.

The Situation

The client described their systems as “running well,” but they were concerned they might have problems under additional load (estimated at 700%); memory consumption was a particular concern. TxMQ completed the Health Check, working with client representatives to gain access to the production Linux boxes, web servers, application servers, portal servers, database servers and LDAP servers. Additional client representatives would be available for application-specific questions.

The Response

Monitoring of the components had to be completed during a normal production day. The web servers, application servers, database server, directory server and Tomcat server all needed to be monitored for several hours. A normal production day typically carried a low volume of transactions, so when monitoring began the statistics were all very normal; resource usage on the boxes was very low. Log files were extracted from the web servers, directory server, database server, deployment manager, application servers and Tomcat (batch) server. Verbose garbage collection was enabled for one of the application servers for analysis, and a Javacore and a Heap Dump were generated on an application server to analyze threads and find potential memory leaks.
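For reference, on IBM’s J9-based JVMs (the JVM that ships with WebSphere Application Server), javacores and heap dumps can also be triggered programmatically through the com.ibm.jvm.Dump API, and verbose garbage collection is typically enabled with a JVM argument. The snippet below is a minimal sketch of that approach under those assumptions; it is illustrative only and not the exact procedure used during this engagement.

```java
// Illustrative only: assumes an IBM J9-based JVM (as shipped with WebSphere),
// where the com.ibm.jvm.Dump API is available.
//
// Verbose GC is normally enabled with a JVM argument such as:
//   -verbose:gc -Xverbosegclog:verbosegc.log
// (set under the server's Generic JVM arguments in the WebSphere Admin Console).

import com.ibm.jvm.Dump;

public class DiagnosticSnapshot {

    public static void main(String[] args) {
        // Writes a javacore*.txt file (thread stacks, monitors) to the JVM's dump location.
        Dump.JavaDump();

        // Writes a heapdump*.phd file for offline analysis of potential memory leaks,
        // e.g. with IBM's Heap Analyzer.
        Dump.HeapDump();
    }
}
```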

Monitoring and analysis tool options were discussed with the client. TxMQ recommended additional IBM tools and gave a tutorial on the WebSphere Performance Viewer (built into the WebSphere Admin Console). In addition, TxMQ’s consultant sent members of the client’s development team links to download IBM’s Heap Analyzer and Log Analyzer (very useful for analyzing WAS System Logs and Heap Dumps). TxMQ’s consultants met with the client’s development and QA staff to debrief them on the data gathered.

The Results

Overall, the architecture was sound and running well, but the WebSphere software had not been patched for several years and code-related errors filled the system logs. There were many potential memory leaks that could cause serious response and stability problems as the application scaled to more users.

The QA team ran stress tests which indicated that response times would degrade very quickly as more users were added. Further, the WebSphere software and web server plug-in were at version 6.1.0 and vulnerable to many known security risks.

The HTTP access and error logs had no unusual or excessive entries. The http_plugin logs were very large and were rotated – making it faster and easier to access the most recent activity.

One of the web servers was using much more memory than the other, although the two should have been configured identically. The production application servers were monitored over a three-day period and did not exhibit any outward signs of stress: CPU usage was very low, memory was not maxed out, and the threads and pools were minimally used. There were a few configuration errors and warnings to research, but the Admin Console settings were all well within norms.

Items of concern:

1) A large number of application code-related errors in the logs; and
2) Memory consumption that grows dramatically during the day.

These conditions can be caused by unapplied software patches and code-related issues. In a 24-hour period, Portal node 3 experienced 66 errors and 227 warnings in the SystemOut log and 1,396 errors in the SystemErr log. These errors take system resources to process, cause unpredictable application behavior, and can cause hung threads and memory leaks. The production database server was not stressed; it had plenty of available CPU, memory and disk space. The DB2 diagnostic log had recorded around 4,536 errors and 17,854 warnings in the previous few months. The Tivoli Directory Server was not stressed, with plenty of available CPU, memory and disk space; its SystemOut log recorded 107 errors and 8 warnings in the previous year, many of which could be fixed by applying the latest Tivoli Directory Server patch (6.1.0.53). The Batch Job (Tomcat) server was not stressed, also with plenty of available CPU, memory and disk space, but its catalina.out log file was 64 MB and contained many errors and warnings.
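As an illustration of the kind of code-related issue that produces this symptom, the hypothetical fragment below caches data in a static, unbounded map; nothing is ever evicted, so heap use climbs steadily over the course of a business day. This is an assumed example for discussion, not code taken from the client’s application.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical example of a slow memory leak: entries accumulate for every
// request and are never evicted, so heap usage grows all day.
public class InvoiceCache {

    // Unbounded, statically rooted map: everything put here stays reachable forever.
    private static final Map<String, byte[]> RENDERED_INVOICES = new ConcurrentHashMap<>();

    public static void remember(String invoiceId, byte[] renderedPdf) {
        RENDERED_INVOICES.put(invoiceId, renderedPdf); // leak: no eviction, no size bound
    }

    // A safer variant would bound the cache (e.g. an LRU map) or scope the data to the
    // request or session so entries become unreachable when no longer needed.
}
```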

The written HealthCheck analysis was delivered to the client with recommended patches and application classes to investigate for errors and memory leaks. In addition, a long-term plan was outlined to upgrade to a newer version of WebSphere Application Server and to migrate off WebSphere Portal Server (since its features were not needed).

Photo courtesy of Flickr contributor Tristan Schmurr

Case Study: WAS Infrastructure Review

Project Description

A large financial services firm (client) grew and began experiencing debilitating outages and application slowdowns, which were blamed on its WebSphere Application Server and the entire WebSphere infrastructure. The client and IBM called TxMQ to open an investigation, identify what was causing the problem, and determine how to put systems, technology and a solution in place to prevent the problems from recurring, while at the same time allowing the client to scale for continued planned growth.

The Situation

As transaction volume increased and more locations came online, the situation grew worse and worse, at times shutting down access completely at some sites. TxMQ proposed a one-week onsite current-state analysis, followed by one week of configuration-change testing in the QA region and one week of document preparation and presentation.

The Challenge

The primary function of the application is to support the financial processes of around 550 company locations; the average number of terminals per location is around five, so more than 2,500 connections are possible. Our client suspected the HTTP sessions were large, interfering with their ability to turn on session replication. The code running on the multiple WebSphere Application Servers was Java/JEE with a Chordiant framework (an older version no longer in support). There were 48 external web services, including CNU and Veritec, most of them batch-oriented. The Oracle database ran on an IBM p770 and was heavily utilized during slowdowns. Slowdowns could not be simulated in the pre-production environment; transaction types and workflow testing had not been automated (a future project would do just that).
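Because large HTTP sessions both inflate memory use and make session replication expensive, one quick way to validate such a suspicion is to measure the serialized size of each session attribute. The sketch below is a minimal, assumed illustration using the standard servlet API; the class and attribute handling are hypothetical and not taken from the client’s application.

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.util.Enumeration;
import javax.servlet.http.HttpSession;

// Minimal sketch: estimate how many bytes each session attribute would add to a
// replicated HTTP session by serializing it. Names here are illustrative only.
public final class SessionSizer {

    public static long estimateSessionBytes(HttpSession session) {
        long total = 0;
        Enumeration<?> names = session.getAttributeNames();
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            long size = serializedSize(session.getAttribute(name));
            if (size >= 0) {
                total += size;
            }
            System.out.printf("session attribute %-30s ~%d bytes%n", name, size);
        }
        return total; // compare against a target such as the sub-20K figure recommended later
    }

    private static long serializedSize(Object value) {
        try (ByteArrayOutputStream bytes = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
            out.flush();
            return bytes.size();
        } catch (Exception e) {
            return -1; // attribute not serializable: it cannot be replicated at all
        }
    }
}
```

In practice a check like this could be run from a debug-only filter or logging hook in a test environment to see which attributes dominate the replication payload.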

The Response

TxMQ’s team met with the members of the client’s team responsible for the WebSphere production environment. There were two IBM HTTP web servers with NetScaler as the front-end IP sprayer; the web servers ran at 3-5% CPU and were not suspected to be a bottleneck. The web servers round-robined requests to multiple WebSphere Application Servers configured identically, except that two servers hosted a few small additional applications. The application servers ran on IBM JS21 blade servers that were approximately 10 years old. Recent diagnostics had indicated a 60% session overhead (garbage collection), so more memory was added to the servers (a total of 12 GB per server) and the WebSphere JVM heap size was increased to 4 GB; some performance improvement was realized. The daily processing peaks ran from 11 am to 1 pm and from 4 to 5 pm, with Fridays the busiest. Oracle 11g served as the database, with one instance for operational processing and a separate instance for DR and BI processing; the client drivers were version 10g.

Our team also met with client team members to discuss the AIX environment. The client was running multiple monitoring tools, so some data was available to analyze the situation at multiple levels. The blade servers were suspected to be underpowered for the application and future growth, and our consultants learned of an initiative to upgrade the servers to IBM Power PS700 blades planned for the first quarter of the next year. The client also indicated that the HTTP sessions might be very large and that the database was experiencing a heavy load, possibly from untuned SQL or database locks.

The Results

TxMQ’s team began an analysis of the environment, including working with the client to collect baseline (i.e., normal processing day) analysis data. We observed the monitoring dashboard, checked WAS settings, and collected log files and Javacores with the verbose garbage collection option on. In addition, we collected the system topology documents. The following day, TxMQ continued to monitor the well-performing production systems, analyzed the data collected the previous day, and met with team members about the Oracle database. TxMQ’s SME noted that the WAS database connection pools for Chordiant were using up to 40 of the 100 possible connections, which was not an indication of a saturated database. The client explained that they use Quest monitoring tools and showed the production database’s current status. The database was running on a p770 and could take as many of the 22 CPUs as needed; up to 18 CPUs had been seen in use on bad days. The client’s DBAs had a good relationship with the development group and monitored resource utilization regularly; daily reports were sent to development listing the SQLs consuming the most CPU. No long-running SQLs were observed that day; most ran in 2 to 5 seconds or less.

Our SME then met with the client’s middleware group and communicated preliminary findings. In addition, he met with the development group, since they had insight into the Chordiant framework. Pegasystems purchased Chordiant in 2010 and then abandoned the product. The database calls were SQL, not stored procedures. The application code had a mix of Hibernate (10%), Chordiant (45%), and direct JDBC (45%) database accesses. Large HTTP session sizes were noticed, and the development group noted that the session size could likely be reduced greatly. The client’s application developers did not change any Chordiant code; they programmed to its APIs. The developers used RAD but had not run the application profiler on their application. Rational Rose modeler provided the application architecture (business objects, business services, worker objects, service layer). In addition, the application used JSPs enhanced with Café tags.
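For context on what the direct JDBC portion of such an application typically looks like, the sketch below shows a container-managed datasource being looked up through JNDI and a connection used inside try-with-resources so it is always returned to the WAS connection pool. The JNDI name, table, and query are assumptions for illustration, not taken from the client’s code, and the syntax targets a modern Java level for brevity.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Illustrative sketch of direct JDBC access against a WebSphere-managed datasource.
// The JNDI name and query are hypothetical.
public class CustomerHistoryDao {

    public int countTransactions(String customerId) throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/AppDataSource");

        // try-with-resources guarantees the connection goes back to the pool,
        // even on error, so the pool does not slowly leak toward its maximum.
        try (Connection conn = ds.getConnection();
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT COUNT(*) FROM CUSTOMER_HISTORY WHERE CUSTOMER_ID = ?")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }
}
```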

Applications worthy of code review or rewrite included the population of GL events into the shopping cart during POS transactions. On the following day the slowdown event occurred: by 10:00 am all application servers were over 85% CPU usage and user response times were climbing past 10 seconds. At 10:30 am, and again at 12:15 pm, database locks and some hung sessions were terminated by the DBAs. The customer history table was experiencing long I/O wait times. One application server was restarted at the operations group’s request. The SIB queue filled to 50,000 messages (due to CPU starvation). A full set of diagnostic logs and dumps was created for analysis. By 4:30 pm the situation had somewhat stabilized.

TxMQ’s SMEs observed that the short-term problem appeared to be a lack of application server processing power and that the long-term problems could best be addressed after dealing with the short-term problem. They recommended an immediate processor upgrade, and plans were made to upgrade the processors using a backup box. Over the weekend the client moved their WebSphere application servers to a backup p770 server. A load similar to the problem load occurred again several days later, and it was reported that the WAS instances ran at around 40% CPU and user response times were markedly better than on Friday.

Recommendations

TxMQ presented a series of recommendations to the client’s executive board, including but not limited to:

  • Treat the Chordiant framework and client application code as the highest priority, given the number and type of errors.
  • Replace the Chordiant framework, which Pegasystems has made obsolete, with newer, SOA-friendly framework(s) such as Spring and Hibernate.
  • Review the application code. There are many errors in the error logs that can be fixed in the code.
  • Reduce the size of the HTTP session; less than 20K is a good target (see the sketch following this list).
  • Upgrade WAS 6.1 to Version 7 or 8 before it goes out of support. IBM extended support or third-party (TxMQ) support is available, and the upgrade may not be possible until the Chordiant framework is replaced.
  • Create a process for tracing issues to the root cause. One example is the database locks that had to be terminated (along with some user sessions); these issues should be followed up to determine the cause and remedial actions.
  • Enhance system testing to emulate realistic user loads and workflows. This allows more thorough application testing and gives the administrators a chance to tweak configurations under production-like loads.
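To illustrate what reducing the session footprint can look like in practice, the sketch below keeps only a lightweight identifier in the HTTP session and reloads the heavy object on demand, rather than storing the full object graph in the session (and therefore in every replication payload). Class and method names are hypothetical, not taken from the client’s application.

```java
import javax.servlet.http.HttpSession;

// Hypothetical illustration of shrinking the HTTP session: keep a small key in the
// session and fetch the large object from a backing service or cache when needed.
public class CustomerSessionHelper {

    private static final String CUSTOMER_ID_KEY = "customerId";

    // Before: session.setAttribute("customerProfile", profile)  // tens of KB per user
    // After: store only the identifier (a few bytes).
    public static void rememberCustomer(HttpSession session, String customerId) {
        session.setAttribute(CUSTOMER_ID_KEY, customerId);
    }

    // Reload the heavy object on demand instead of replicating it with the session.
    public static CustomerProfile currentCustomer(HttpSession session,
                                                  CustomerProfileService service) {
        String id = (String) session.getAttribute(CUSTOMER_ID_KEY);
        return (id == null) ? null : service.loadProfile(id);
    }

    // Hypothetical collaborators, sketched only so the example is self-contained.
    public interface CustomerProfileService { CustomerProfile loadProfile(String id); }
    public static class CustomerProfile { /* large object graph in the real application */ }
}
```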

Photo courtesy of Flickr contributor “Espos”