Why Should I Use An IBM Business Partner?

Wouldn’t it make more sense to deal directly with IBM?

This is a topic of much discussion around the TxMQ office these days, as it is at other solutions providers. Prospective customers often ask us: If my company’s able to go direct with my manufacturer/vendor (be it IBM, Microsoft, Oracle, or whomever), why shouldn’t we?
It’s a fair question. Companies, especially large ones, oftentimes have multilayered supply-chain relationships for acquiring software, hardware, and talent.
On the one hand, long-standing, embedded relationships often dictate the way a company acquires technical solutions. A friendship nurtured through years of trusted business dealings – sometimes called a “trusted advisor” relationship – may be the perfectly legitimate and ideal way of solving technical challenges. Yet sometimes things change. What happens when your salesperson – the same salesperson who’s covered your company for years – resigns?
Back to the broader topic of direct or business partner: As software and hardware companies like IBM and the other majors evolve over time, their business models push more sales through channel partners. Business partners are inevitably smaller companies, and are typically far better equipped to build and maintain longstanding, deep customer relationships. Sales teams at the major vendors change, oftentimes annually, leading to spotty coverage of accounts, and occasionally even leaving some companies with no direct coverage at all.
Business partners typically offer more consistency and stability of coverage. In addition, as IBM and the majors continue to add layers of complexity to their brands, and shift products in their portfolios, it’s very difficult for companies (and even their field salespeople) to keep products straight.
Surprisingly, this is a knowledge area in which partners tend to excel. Most business-partner leaders are themselves former employees of the majors. They left the majors for the freedom to engage with customers and to build deeper, better relationships over longer periods of time, without the bureaucracy that comes with working in a large shop or the threat of annual account “realignment.”
Also, as solutions offered to customers become more complex and cross over traditional brand borders, business partners are better able to navigate these tricky waters.
Recently IBM, along with other majors, went through internal realignments that left some salespeople covering products new to them and shifted others to entirely new lines of business. Not so with business partners, who are free to engage as they always have.
What about software and hardware sales? Logic says it must be cheaper to buy direct. In practice, it usually isn’t. IBM, as one example, has dramatically shifted internal teams and reduced field and inside sales coverage to better align its resources with today’s market. What does this mean? It means it’s usually cheaper to order software and hardware from a partner. IBM knows it’s very expensive to maintain a large, geographically dispersed sales team, and far more cost-effective to let IBM business partners sell more and more software, services and hardware on its behalf.
Partners therefore have full access to special bid requests, discounting, and any and all sales tools a direct seller has in the arsenal. In addition, a business partner’s service rates are nearly always below IBM direct rates.
Conclusion
In the end, each company must decide what’s best for itself, but don’t presume that the way you engaged with IBM and others in the past is the best way to engage in the future. A business partner can be your best ally to stay current with technology, and enable the nimble, robust infrastructure your company needs to compete and win in today’s marketplace.
Let’s take this conversation a bit further – email me at chuck@txmq.com.
(Image by Flazingo Photos)
 
 

How Do You Support Your Software?

Software/OS services & support not your core competencies? We support more than just WebSphere.

In today’s reality of constantly evolving technology, managing software support is critically important. There are never-ending changes to core products, changes to deployment options (on-premises, cloud, hybrid), and a new generation of changes most certainly lurking around the corner, especially in IBM’s Systems Middleware world.

The effort to support this endless hodgepodge has grown increasingly complicated. In point of fact, most companies run multiple versions of software on different operating systems, which makes support options even more confusing. That’s why more and more companies are facing hard decisions about whether to continue their vendor-support agreements for software and OS, and why more and more companies are running unsupported software and OS, even if it’s occasionally side-by-side with a newer, supported version of the same.

That’s why more and more companies are partnering with TxMQ for support of their IBM environments, WebSphere systems and far more.

TxMQ is uniquely able to design and deploy support solutions across almost any software/OS environment. And we’ll design a solution that fits right – whether it’s short-term support until a patch or upgrade is finished, or a long-term outsourcing partnership. All options are available 24/7/365.
In addition, we’re one of the few firms that supports legacy mainframe and IBM System i and System p (AIX), alongside Linux (all major distributions), Solaris, Windows and other variants. So yes, we’re a lot more than just pure IBM.
A few other notes about our capabilities…

Additional Offerings
TxMQ support customers can also take advantage of reduced rates for TxMQ services, discounted purchasing of IBM software and hardware, and related services like software asset management (ILMT, SUA, SCCD), patch management and other managed services.

Implementation Support
TxMQ’s deeply technical talent can also help with planning for upgrades, replatforming, license optimization and integration services. In select cases, we can also work with in-house development teams to offer support for custom home-built applications.

Custom Solutions
TxMQ engineers and developers can work for you, or with your teams, for custom-application development needs. As with the above services, customers under support agreements with TxMQ are entitled to these services at a discount.
Let’s start a conversation on the advantages of getting back to basics and focusing on your core competencies, then letting TxMQ worry about your support.
Email info@txmq.com or chuck@txmq.com for more information, or call 716-636-0070 x222.

How (NOT) To Buy Enterprise Software

Whether you’re in IT, or in a line of business (LOB), at some point in your career you’ll likely be given a budget with authority to acquire enterprise software or an integrated solution for your company.
You and your team will do an analysis, possibly hold a “bakeoff” of some sort to eliminate a vendor or two, and ultimately select what you believe to be the right enterprise solution.
Maybe it’s a cloud-based black-box type of solution like Workday or Salesforce. Maybe it’s a platform product, like WebSphere Application Server or SharePoint, used to support other solutions. Maybe it’s none of the above. Regardless, the proposal will inevitably include a component to stand up and install supporting services, plus after-support.
Do yourself a favor: Spend the time during your internal evaluation to ask your team and your leadership if you truly have budget to extend beyond the basic enterprise software acquisition cost.
Here at TxMQ, we’ve noticed a trend the past few years, and it’s a challenging one. We see more and more companies slash budgets for everything beyond the bare license cost of the software. That usually leaves the company with acquired software products it’s not necessarily able to stand up itself, let alone support and integrate.
In many cases this isn’t so problematic. After all, some solutions are certainly straightforward enough. Yet even cloud-based tools like Salesforce are, in fact, extremely complex systems that require extensive pre-planning, integration and ongoing support. This role can be tough to manage internally, and is oftentimes better suited to a solutions provider like TxMQ.
TxMQ has helped countless companies fix bad or poorly planned installations of enterprise software – installations that went south because the budget was restricted solely to the license. We’ve seen outages, lost revenue and actual firings due to poor planning – again, only because the budget was cut to the bare minimum and covered only the software-license cost. And the corrective engagements are costly, both in the demands on internal staff and in the dollars spent on consultants. In nearly all cases, these costs could have been avoided with upfront planning for the installation and deployment of the solution.
By planning, I mean understanding internal needs, skills, integration points, storage needs, security, networking and more.
A quote often attributed to Abraham Lincoln goes: “If I had 6 hours to cut down a tree, I’d spend the first 4 sharpening the axe.” If you’re being asked to acquire and install a solution – whether it’s enterprise software, hardware or hybrid – don’t just grab the axe and start swinging. You’ll hurt yourself, likely break the axe and end up with a very damaged tree.
Save a tree. Email TxMQ today.

I'm A Mainframe Bigot

I make no apologies for my bigotry when I recommend mainframes for the new economy. Dollar for dollar, a properly managed mainframe environment will nearly always be more cost effective for our customers to run. This doesn’t mean there aren’t exceptions, but we aren’t talking about the outliers – we’re looking at the masses of data that support this conclusion.
To level-set this discussion: If you’re not familiar with mainframes, move along.
We aren’t talking about the Matrix “Neo, we have to get to the mainframe” fantasy world here. We’re talking about “Big Iron” – the engine that drives today’s modern economy. It’s the system where most data of record lives, and has lived for years. And this is a philosophical discussion, more than a technical one.
I’d never say there aren’t acceptable use cases for other platforms. Far from it. If you’re running a virtual-desktop solution, you don’t want that back end on the mainframe. If you’re planning to do a ton of analytics, your master data of record should be on the host, and likely there’s a well-thought-out intermediate layer involved for data manipulation, mapping and more. But if you’re doing a whole host (pun intended) of mainstream enterprise computing, IBM’s z systems absolutely rule the day.
I remember when my bank sold off its branch network and operations to another regional bank. It wasn’t too many years ago. As a part of this rather complicated transaction, bank customers received a series of letters informing them of the switch. I did some digging and found out the acquiring bank didn’t have a mainframe.
I called our accountant, and we immediately began a “bake off” among various banks to decide where to move our banking. Among the criteria? Well-integrated systems, clean IT environment, stability (tenure) among bank leadership, favorable business rules and practices, solid online tools, and of course, a mainframe.
So what’s my deal? Why the bigotry? Sure, there are issues with the mainframe.
But I, and by extension TxMQ, have been doing this a long time. Our consultants have collectively seen thousands of customer environments. Give us 100 customers running mainframes and 100 customers who aren’t, and I guarantee you’ll find far more people and far greater costs supporting solutions of similar size in the non-mainframe shops.
Part of the reason is architecture. Part is longevity. Part is backward-compatibility. Part is security. I don’t want to get too deep into the weeds here, but in terms of hacking, unless you’re talking about a systems programmer with a bad cough, the “hacking” term generally hasn’t applied to a mainframe environment.
Cloud Shmoud
Did you know that virtualization was first done on the mainframe? Decades ago in fact. Multi-tenancy? Been there, done that.
RAS
Reliability, Availability and Serviceability define the mainframe. When downtime isn’t an option, there’s no other choice.
Security
Enough said. Mainframes are just plain more secure than other computer types. The NIST National Vulnerability Database rates mainframes among the most secure platforms when compared with Windows, Unix and Linux, with reported vulnerabilities in the low single digits.
Conclusion
I had a customer discussion that prompted me to write this short piece. Like any article on a technology that’s been around for over half a century, I could go on for pages and chapters. That’s not the point. Companies at times develop attitudes that become so ingrained, no one challenges them or asks if there’s any proof. For years, the mainframe got a bad rap, mostly due to very effective marketing by competitors, but also because those responsible for supporting the host began to age out of the workforce. Kids who came out of school in the ’90s and ’00s weren’t exposed to mainframe-based systems or technologies, so interest waned.
Recently, the need for total computing horsepower has skyrocketed, and we’ve seen a much-appreciated resurgence in the popularity of IBM’s z systems. Mainframes are cool again. Kids are learning about them in university, and hopefully, our back-end data will remain secure as companies realize the true value of Big Iron all over again.

Upgrade Windows Server 2003 (WS2003) – Do It Today

Another day, another end-of-support announcement for a product: On July 14, 2015, Windows Server 2003 (WS2003) goes out of support.
Poof! Over. That’s the bad news.
What’s the upside? Well, there isn’t really an upside, but you should rest assured that it won’t stop working. Systems won’t crash, and software won’t stop running.
From the standpoint of security, however, the implications are rather more dramatic.
For starters, this automatically means that anyone running WS2003 will be noncompliant with PCI security standards. So if your business falls under these restrictions – and if your business accepts any credit cards, it certainly does – the clock is ticking. Loudly.
There’ll be no more security patches, no more technical support and no more software or content updates after July 14, 2015. Most importantly, this information is public. Hackers typically target systems they know to be out of support. The only solution, really, is to upgrade Windows Server 2003 today.
TxMQ consultants report that a large percentage of our customers’ systems run on Windows Server, and some percentage of our customers are still on WS2003. There are no terms strong enough to stress the need to get in touch with TxMQ, or your support vendor, for an immediate plan to upgrade Windows Server 2003 and affected systems.
Server migrations oftentimes take up to 90 days, and application migrations can take up to 2 months. Frankly, any business still running WS2003 doesn’t have 60 days to spare, let alone 90. So please make a plan today for your migration/upgrade.

The Need For MQ Networks: A New Understanding

If I surveyed each of you to find out the number and variety of technical platforms running at your company, I’d likely find that more than 75% of companies haven’t standardized on a single operating system, let alone a single technical platform. Vanilla is a rarity, largely due to the many needs of 21st-century businesses. And with the growing popularity of the cloud (which makes knowing your supporting infrastructure even more difficult), companies today must decide on a communications standard between their technical platforms.
Why MQ networks? Simple. MQ gives you the ability to treat each of your data-sharing members as a black box. MQ gives you simple application decoupling by limiting the exchange of information between application endpoints to application messages. These application messages have a basic structure of “whatever” as the payload, plus an MQ header carrying routing-destination and messaging-pattern information. The MQ message becomes the basis for your intercommunication protocol, one an application can use no matter where the application currently runs, and even when the application gets moved in the future.
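To make the idea concrete, here is a minimal, hedged sketch using the IBM MQ classes for Java. The queue manager name (QM1), queue names and JSON payload are hypothetical, chosen only for illustration; the point is that the routing and reply information travels in the message descriptor (MQMD) header, not inside the payload.

    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class OrderSender {
        public static void main(String[] args) throws Exception {
            // Hypothetical names: a local queue manager QM1 and a queue ORDERS.IN.
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue queue = qmgr.accessQueue("ORDERS.IN", CMQC.MQOO_OUTPUT);

            MQMessage msg = new MQMessage();
            // MQMD header fields carry the routing and messaging-pattern information.
            msg.replyToQueueName = "ORDERS.REPLY";   // where the consumer should answer
            msg.messageType = CMQC.MQMT_REQUEST;     // request/reply pattern
            msg.format = CMQC.MQFMT_STRING;          // the payload itself is "whatever"
            msg.writeString("{\"orderId\": 12345, \"sku\": \"ABC-1\"}");

            queue.put(msg, new MQPutMessageOptions());
            queue.close();
            qmgr.disconnect();
        }
    }

The consumer never needs to know where the sender runs; it simply reads ORDERS.IN and answers on whatever reply-to queue the header names.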
This standard hands your enterprise the freedom to manage applications completely independently of one another. You can retire applications, bring up a new application, switch from one application to another or route in parallel. You can watch the volume and performance of applications in real time, based on the enqueuing behavior of each instance, to determine whether it’s able to keep up with the upstream processes. No more guesswork! No more lost transactions! And it’s easy to immediately detect an application outage, complete with a history of how many messages didn’t get processed. This is the foundation for establishing Service Level Management.
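That real-time visibility comes straight from the queue manager. Below is a hedged sketch, again with hypothetical names and using the IBM MQ classes for Java, that inquires on current and maximum queue depth; a steadily rising depth means a consumer is falling behind its upstream producers.

    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class QueueDepthMonitor {
        public static void main(String[] args) throws Exception {
            // Hypothetical names, connecting to a local queue manager for illustration.
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue queue = qmgr.accessQueue("ORDERS.IN", CMQC.MQOO_INQUIRE);

            int depth = queue.getCurrentDepth();     // messages waiting right now
            int maxDepth = queue.getMaximumDepth();  // the queue's configured ceiling
            System.out.printf("ORDERS.IN depth: %d of %d (%.0f%% full)%n",
                    depth, maxDepth, 100.0 * depth / maxDepth);

            queue.close();
            qmgr.disconnect();
        }
    }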
The power of MQ networks gives you complete control over your critical business data. You can limit what goes where. You can secure it. You can turn it off. You can turn it on. It’s like the difference between in-home plumbing and a hike to the nearest water source. It’s that revolutionary for the future of application management.

Case Study: Client experiences WebSphere Business Integrator Outage

Project Description

A regional grocery chain (CLIENT) experienced an outage in its WebSphere Business Integrator (WBI) application. WBI is no longer supported, and the original application developer was no longer available.

The Situation

The CLIENT was using an older WebSphere® product, WebSphere Business Integrator (WBI), and ran an application called Item Sync (developed by IBM Global). Item Sync was not working properly, and the CLIENT needed to take corrective steps.
In the application flow, work is initiated by vendors and becomes visible in the Vendor Transaction List (VTL) screen. The WBI application is responsible for routing that work to the next step in the approval process. This routing was not occurring, which was part of the overall problem.
The other side of the problem was that the Item Sync application’s MQ archive queue had filled up and, being full, was no longer accepting new messages. The customer’s initial corrective step was to purge the archive queue. They then rebooted the WBI server, and the MQ collectors were verified as active and in their correct state and sequence. New work then appeared in the VTL screen, but it was not seen in the next step.

The Response

One week earlier, the CLIENT had experienced problems when its archive queue became full and the workflow process stopped working. They purged the queue, which removed all messages. Because those messages were not persistent, they had never been logged and were therefore lost.
As part of the initial response, the CLIENT rebooted the WBI server, and the MQ collectors were verified as running in their correct state and sequence.
TxMQ and the CLIENT held a conference call. After hearing the issue, TxMQ’s consultant recommended restoring the backed-up configuration to determine whether connectivity for the workflow processing would resume. The consultant also recommended enabling the native MQ alerts that provide an early warning of problems such as a queue filling up.
The same evening, after the conference call, the CLIENT restored the configuration, which included MQ, the connectors and WBI. The CLIENT then brought the environment back up and tested it. The environment came up and was operational.
The maximum depth on the archive queue had also been increased so that the archive queue would not fill up again.
The next morning, the CLIENT and TxMQ reconvened. The remaining step was to recreate the messages that had been deleted from the archive queue, and the customer wanted TxMQ’s help to do so. Unfortunately, because TxMQ was unfamiliar with the application schema and process, recreating the application messages was better left to the business owners.

The Results

In this scenario, the queue should not have been purged in an overflow condition. The correct action would have been to copy the messages to a backup queue or a file system so they could be replayed later.
Before steps were taken to partially initiate a fix, care should have gone into making sure it was a complete fix that would work. A sketch of the safer alternative, offloading messages to a backup queue instead of purging them, follows.
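As an illustration of that correct action only, the hedged sketch below drains a full queue into a backup queue with the IBM MQ classes for Java so the messages stay available for replay. The queue manager and queue names are hypothetical, and for simplicity the moves are not done under syncpoint.

    import com.ibm.mq.MQException;
    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class ArchiveOffload {
        public static void main(String[] args) throws Exception {
            MQQueueManager qmgr = new MQQueueManager("QM1");
            MQQueue source = qmgr.accessQueue("ITEMSYNC.ARCHIVE", CMQC.MQOO_INPUT_SHARED);
            MQQueue backup = qmgr.accessQueue("ITEMSYNC.ARCHIVE.BACKUP", CMQC.MQOO_OUTPUT);

            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_NO_WAIT;        // stop as soon as the source is empty
            MQPutMessageOptions pmo = new MQPutMessageOptions();

            int moved = 0;
            try {
                while (true) {
                    MQMessage msg = new MQMessage();
                    source.get(msg, gmo);            // destructive get from the full queue...
                    backup.put(msg, pmo);            // ...but the message is preserved for replay
                    moved++;
                }
            } catch (MQException e) {
                if (e.reasonCode != CMQC.MQRC_NO_MSG_AVAILABLE) throw e;  // an empty queue ends the loop
            }
            System.out.println("Moved " + moved + " messages for later replay.");
            source.close();
            backup.close();
            qmgr.disconnect();
        }
    }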

Whitepaper: 2014 BPM Products – Mature But Not Equal

Project Description
Business Process Management, or BPM, is a hot topic in 2014. With the escalating cost of employee benefits, coupled with an economy that remains flat or barely growing, finding new ways to reduce costs and personnel effort is a primary goal for many of our customers.
BPM software has reached a level of maturity in the marketplace, with many options and a wide variety of capabilities for customers to choose among. Gartner’s Magic Quadrant scatters a number of top vendor products that share many attributes but differ in integration capability and in how much customization they need.
EXAMINING ROI
When evaluating BPM products, tailoring business requirements around adoption and the expected life of the product is imperative to understanding the true cost of ownership. In addition, ongoing support costs, including training and retaining/recruiting skilled workers to support the products, must be included in any cost discussion. Building applications from scratch (the do-it-yourself approach) may result in lower up-front costs, but such solutions are generally able to realize less than 75% of the functional business-process requirements, and they regularly fail to eliminate the overhead associated with the approximately 25% of exceptions and exception handling that are not cost-effective to automate.
Do-it-yourself approaches are likely to require reworking, or even a complete rewrite, which undermines the business objective of reducing costs through business-process automation. A business moving from zero automation to 75% automation, with certainty that its process needs will not change for five years, may implement such a solution successfully. More often, though, the cost to customize or replace the system within those same five years will cause it to miss ROI expectations.
Furthermore, operational support costs and upgrades to the supporting infrastructure are rarely considered in ROI calculations – especially if memory management or performance becomes a problem with a new application, requiring larger hardware and software outlays than initially predicted. That is before the aforementioned labor costs to support non-industry-standard solutions are even factored into the equation. Such costs are sure to compromise ROI, as the business experiences incremental increases in IT cost without an accompanying increase in perceived value. What is unique about BPM initiatives is that they carry non-functional requirements, external to the business process itself, alongside the functional requirements from the business. These requirements must be identified and prioritized as part of the up-front project costs and product-selection criteria.
EVALUATING BPM PRODUCTS
Prior to evaluating any BPM product, BPM initiatives must have the following documentation fully prepared:
Business Requirements
Business requirements should be outlined, demonstrating a detailed process analysis with use cases representing all required functionality. The business units must prioritize use cases individually, with a plan to measure acceptance for each use case. These requirements should also include costs for the way the business operates, based on current and historical data.
Business requirements are basic to any application initiative, but BPM doesn’t stop there. Rarely are today’s businesses able to predict changes in acquisitions, mergers, compliance, business climate, or other regulatory impacts on business processes. Flexibility in modeling and changing business processes must therefore be a basic assumption; it is what characterizes and differentiates BPM products from other applications that simply automate business functions.
BPM Capabilities
BPM capabilities typically include what is known as BPEL, or Business Process Execution Language, which BPM tools use as input for the decisions that determine the routing or processing path within a business transaction. When automating a business process, flexibility for ongoing change management must be kept at the forefront, in keeping with the concept of 80% automation and 20% exception handling, with exceptions routed via BPM technologies. Such flexibility requires an agile business-process model, for which many BPM products do not account (buyer beware). That leaves customers performing regular “rewrites” of their code and resorting to costly “workarounds,” which can outweigh the initial benefits of automation if such rules are not externalized from the business modules, as the sketch below illustrates.
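To illustrate only the externalization point, and not any particular BPM product’s API, here is a hedged Java sketch in which routing thresholds live in an external properties file (routing-rules.properties, a hypothetical name) rather than inside the business module, so the routing path can change without a code rewrite.

    import java.io.FileInputStream;
    import java.util.Properties;

    public class ClaimRouter {

        private final Properties rules = new Properties();

        public ClaimRouter(String rulesPath) throws Exception {
            try (FileInputStream in = new FileInputStream(rulesPath)) {
                rules.load(in);   // thresholds and step names are edited here, not in code
            }
        }

        // Returns the next processing step for a claim, e.g. "AUTO_APPROVE" or "MANUAL_REVIEW".
        // Keys such as "claim.AUTO.threshold" are invented for this sketch.
        public String nextStep(String claimType, double amount) {
            double threshold = Double.parseDouble(
                    rules.getProperty("claim." + claimType + ".threshold", "1000"));
            return amount <= threshold
                    ? rules.getProperty("claim." + claimType + ".defaultStep", "AUTO_APPROVE")
                    : rules.getProperty("claim." + claimType + ".overLimitStep", "MANUAL_REVIEW");
        }
    }

When the business raises a threshold or adds a step, only the properties file changes; the deployed module is untouched.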
BPM ESCALATION & EXCEPTION HANDLING PROCESS
Business Process Management involves channeling 80% of the work through automated mechanisms, with the remaining 20% of exceptions handled through a BPM product. These exceptions are identified in terms of business-process status and escalated to humans who can act on them (approve, reject, call a customer, call a supplier, etc.). The BPM rules effectively regulate how long a pending action can remain in a given status before another escalation, notification, or exception is triggered to initiate another management condition.
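A hedged sketch of that escalation rule follows; the status names and time limits are invented for illustration and would normally come from externalized rules rather than constants in code.

    import java.time.Duration;
    import java.time.Instant;

    public class EscalationCheck {

        public enum Action { NONE, NOTIFY_OWNER, ESCALATE_TO_MANAGER }

        // How long has the item sat in its current status, and what should happen next?
        public static Action evaluate(String status, Instant enteredStatusAt, Instant now) {
            Duration pending = Duration.between(enteredStatusAt, now);
            if ("AWAITING_APPROVAL".equals(status)) {
                if (pending.toHours() >= 24) return Action.ESCALATE_TO_MANAGER;
                if (pending.toHours() >= 4)  return Action.NOTIFY_OWNER;
            }
            return Action.NONE;
        }

        public static void main(String[] args) {
            Instant entered = Instant.now().minus(Duration.ofHours(6));
            // Prints NOTIFY_OWNER: the item has been pending for 6 hours.
            System.out.println(evaluate("AWAITING_APPROVAL", entered, Instant.now()));
        }
    }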
Identifying a business event or hung transaction, automating it according to accepted routing rules, and managing the resulting events are part and parcel of inherent BPM function. These “standard” BPM functions are typically handled through state management inside the BPM database. That database functionality can degrade quickly when transactions begin stacking up, so it is imperative to validate the scalability of the underlying database technology. BPM databases behave differently from typical application databases: they manage “in-flight” status information, and upon completion of the business process the data is quickly archived and moved off the system. This requires different optimization mechanisms, which should be discussed with your DBA in light of your BPM transaction volumes.
REPORTING REQUIREMENTS
The business process automation being implemented should give heavy consideration to ongoing management and feedback to the business unit about the number of transactions processed straight through, versus exception handling, both historically (year over year) and real-time. Each business process owner will want automated reports, or possibly ad-hoc reporting capabilities, to know exact measurements and statistics about each process, likely down to specific characteristics of each transaction.
The best BPM solutions provide mechanisms for a variety of reporting capabilities, but reporting should also be available through standard database queries, or by exporting data to a data warehouse where enterprise reporting tools are readily available. This is an accepted approach, given that, until such automation is in place, business owners rarely have detailed requirements around their reporting needs. Be certain that your product selection provides robust, detailed reporting by date-range input and in real time (preferably using configurable dashboards that can be customized for each business-process owner). Each dashboard can then be given a URL or logon that the business-process owner can use to access individual information and reports. A sketch of such a date-range query appears below.
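As a hedged example of the standard-database-query route, the sketch below counts straight-through versus exception outcomes over a date range with plain JDBC. The DB2 connection URL, the PROCESS_INSTANCE table and its columns are hypothetical stand-ins for whatever schema or warehouse export your chosen BPM product actually provides.

    import java.sql.Connection;
    import java.sql.Date;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class StraightThroughReport {
        public static void main(String[] args) throws Exception {
            String sql = "SELECT outcome, COUNT(*) AS cnt "
                       + "FROM PROCESS_INSTANCE "
                       + "WHERE completed_at BETWEEN ? AND ? "
                       + "GROUP BY outcome";          // e.g. STRAIGHT_THROUGH vs. EXCEPTION

            try (Connection con = DriverManager.getConnection(
                         "jdbc:db2://reporting-host:50000/BPMDB", "report", "secret");
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setDate(1, Date.valueOf("2014-01-01"));
                ps.setDate(2, Date.valueOf("2014-12-31"));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%-20s %d%n", rs.getString("outcome"), rs.getLong("cnt"));
                    }
                }
            }
        }
    }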
Many of today’s BPM products provide a “modeling” capability with “deployment” to a run-time environment. This approach delivers flexibility, so that models can be changed, tested, accepted, and deployed to a training environment on a regular application-release cadence, enabling business employees to adopt change and process improvement on an ongoing basis. Such tools require multiple environments to enable that flexibility. The days of application environments consisting of a single dev and prod instance are long gone. It’s far more complex now. Flexible architectures require dev, test, stage, train and production environments, with additional needs for high-volume transactional and integrated environments, plus performance-testing and DR instances that insure against loss of revenue from business or technical interruption.
PLATFORM SELECTION & INFRASTRUCTURE SIZING REQUIREMENTS
For each business process slated for automation (such determination must be based on current costs, current transaction counts and growth predictions, and/or SLA information in terms of time to complete), inputs must be evaluated for the BPM platform selection, infrastructure sizing and costs. Such sizing and platform selection should be based on solid business transaction volume projections for each use case. If the idea is to “grow” the infrastructure with the business transaction volume growth, then costs must also include the systems management software, personnel, and mechanisms for enabling a performance and capacity management process, as well as monitoring software and monitoring automation.
GAP ANALYSIS
Some proactive work should be done to determine the “as-is” situational analysis, and to develop the envisioned “to-be” or target system that will address the needs or concerns with the current “as-is.” Once agreement has been reached on the vision going forward, a gap analysis is necessary to identify the effort and costs to go from the current situation to the proposed vision. During this process, many alternatives will be identified, with varying cost scenarios, timelines, and resource impacts. Formalizing an “Impact Statement” process can be highly valuable in identifying the costs, timeline, and adoption effort associated with the various ways to address the gaps.
BPM product selection should always begin with a good understanding of your BPM needs. Vendors are eager to showcase their individual product capabilities and give customer references. Check out BPM trade shows, articles, websites, and request product demos. Within every IT shop, there are experienced and valued technicians with experience to help identify what went well and what didn’t go well with past BPM initiatives. Whether or not past BPM initiatives met their ROI or business goals can be difficult information to obtain, but well worth the research. Businesses should ask vendors to provide the cost savings basis for each customer, to effectively identify opportunities for realizing cost reduction with any new BPM initiative. Many vendors have developed costing formulas that can help businesses build an effective business use case scenario to drive a BPM initiative that might otherwise flounder.
In contemplating a BPM approach, consideration should be given to the product selection based on best-of-breed vendor products. Best-of-breed products typically involve a higher level of investment as the intent of these products is to integrate them as cornerstone technologies within an enterprise, with the expectation that critical business processes will be running on them. BPM tools are expensive, generally requiring a change in IT culture for adoption and integration of the BPM services into your SDLC, with centralized BPM expert(s) for ongoing support and maintenance of BPM suites.
If “best-of-breed” is outside your financial reach (i.e., your approved budget), re-evaluate your business use cases and areas of savings. Building out the many BPM mechanisms for exception handling and management of state in a database, with scalability and management capability, is a difficult and lengthy development initiative with high risk. Open-source BPM products carry a higher risk of stretching the adoption timeline and could zero out your business-case ROI with increased support costs, effectively exchanging business personnel for the more expensive IT personnel required for ongoing development and support of a customized open-source solution. Of even more concern, as business-process transaction volumes increase, you alone are responsible for the scalability and performance of the open-source solution, which may turn into a 24x7x365 support commitment.
Image provided by Enfasis Logistica Mexico

Discussion: Q&A: Cindy Gregoire

Insights Magazine Q&A with TxMQ Practice Manager Cindy Gregoire

 

For many organizations, managing a vast middleware environment is a daunting task, especially in terms of building and maintaining a dynamic, cost-effective, and secure infrastructure. TxMQ, Inc., an IBM Premier Business Partner located in Buffalo, NY, assists its clients with technical support and with setting up their middleware environments to achieve a better return on investment and prepare for success.

Insights Magazine recently sat down with Cindy Gregoire, Practice Manager, Middleware & Application Integration at TxMQ, who has been in the middleware space for more than 15 years, supplying technical support to clients and assisting them with setting up their middleware environments. In this Q&A, Gregoire explains the top four infrastructure vulnerabilities organizations should look out for, the challenges clients face in middleware management and operational IT planning, the importance of embracing 21st-century information management and how to do it on a tight budget, and the benefits of developing a technology roadmap.

 

YOU’VE WORKED WITH HUNDREDS OF CLIENTS. WHAT’S A COMMON CHALLENGE THAT THEY FACE?

Cindy Gregoire: Many of our short-term engagements involve customers who have hit a brick wall and can’t move a project forward, or they’re running into a performance problem, the system is not behaving correctly, and the current staff is evaluating the problem but unsure how to proceed. The problem or resource need is beginning to escalate, and the company now needs to go outside its in-house talent for some external help. We start by talking through what the particular pain point is and what the resource need is, then we take the next step, put together a proposal and get that in front of the customer. Then we even put the consultant on the phone and coordinate activities.

 

WHAT’S UNIQUE ABOUT THE APPROACH?

Gregoire: [This is] fairly common in consulting practices at large, but our niche is mainframe integration middleware, where you have this infrastructure software that is somewhat nebulous and it’s having problems — but the folks that are supporting it or part of that IT shop are looking at this going, “I don’t know what the problem is; it’s Greek to me. Let’s bring in a middleware specialist; they should be able to tell us what’s going on with this system.”

So we do a lot of system Healthchecks, where we take a look at the various system and application logs, we take a look at the operating system, and we look at the statistics that the operating system is producing. Knowing what we know about middleware, we compile our findings and analysis and develop some recommendations on what could be done. We’re able to dive deep into the technology and look for other sources of configuration correction, other than just potentially throwing more memory at a server or upgrading processors, which then allows the application to just consume more memory. So rather than fixing the root cause, you’ve actually exacerbated the problem, and that’s very typical. We see that quite a bit. In the systems health check, we’ll take a look at what’s going on middleware- and system-wide, we’ll create a list of recommendations and work with their technical staff to get those recommendations implemented. We free up the resources to really help customers achieve a better return on investment in their middleware infrastructure.

 

WHAT CONSIDERATIONS SHOULD ORGANIZATIONS MAKE IN TERMS OF THEIR OPERATIONAL IT PLANNING AND CONDITIONS OF EMPLOYMENT AND SECURITY STANDARDS?

Gregoire: Especially in 2013, we saw more than 1,000 breaches of company data over the Internet. Not all breaches are reported in a timely manner or even get publicity. Clearly the Target [Corporation] breach was the one that received the most attention, probably because it hit at the peak holiday season, and Target did a good job of notifying its customers and, of course, they are nationwide. So a lot of people heard about it and many people were affected. Customers are wary. The way you lay out your security standards and how you manage your e-commerce across the Internet is obviously a different animal than it was 10 years ago.

 

WHAT INDUSTRIES ARE MODELING THE RIGHT WAY TO SECURE TRANSACTIONS?

Gregoire: Ten years ago, the portion of e-commerce that was done over the Internet was so much smaller than it is today. But most businesses — for example, retailers and smaller financial management companies — can really learn a lot from the financial services industry, because banking in general has always been subject to very rigid security standards. Many of our retail and SMB customers suffer gaps in their understanding of the breadth and depth of security topics, why things are done a certain way, or the importance of security policies and governance at a project-management level to ensure each new project incorporates the appropriate security practices for the type of data being handled. Each industry has its own specific regulatory practices that have guidelines around the “right way” to secure transactions, like PCI compliance or HIPAA, and a lot depends on where that data exchange is occurring. A lot of the consulting we do with customers, before they even engage with us, is just talking about some of these security standards.
For example, when you look at network topology, it’s important to evaluate how you are breaking out your network IPs, subnets, etc. and isolating which incoming connections can actually be established with your application servers within your trusted network. You have your internal network and then you should have these tiers of protection and a firewall between those systems and the Internet.

So what does that look like? You have diagrams—have you done your architecture? Have you laid out your network topology? We don’t necessarily do a lot of network consulting, although most companies have done it already themselves because it’s a necessity. They want to enable Internet commerce with their internal applications, and that’s really where the rub is right now. Large companies have many internal backend systems for processing and generating invoices. EDI [electronic data interchange] has been happening for many years now. Yet this idea of being able to take a credit card and pass that credit card data to an online internal order processing system is scary. How protected is that transaction throughout the end-to-end lifecycle of business?

So, what other considerations should organizations be making? Number one, in light of the many breaches, you definitely want to be planning for network isolation and internal security as well as external security measures that ensure customer data is protected throughout the life of the transaction. TxMQ can help you with this process and this is where an architecture-review service comes in. We review the transaction lifespan and what topology it’s crossing, and through our security management practice, which is part of our ITIL group, we’ll take a look at what things you’ve done, what things you need to have in place, and where are you in this continuum from minimum security to high security.

So those are areas where we do consult with our customers, helping them develop a plan and start implementing the changes needed to bring them to a level of security where they can feel comfortable, their customers can feel comfortable that standards are being met, and everyone can feel assured that transactions are fully secure.

 

TXMQ’S WEBSITE MENTIONS 21ST CENTURY INFORMATION MANAGEMENT. CAN YOU EXPLAIN WHAT THE CONCEPT IS AND HOW ORGANIZATIONS CAN BENEFIT FROM THIS APPROACH?

Gregoire: What we’re talking about is the way IT was done in the 20th century versus how information management in the 21st century is different, and clearly there are challenges for organizations adapting to the new paradigm. If you were starting up a brand-new business today, you would implement an environment using new technology. New companies are far better off from the IT perspective working from that paradigm, because everything you’re working with is new, it’s going to be integrated, and it’s going to have all the up-to-date standards, etc.

But TxMQ customers, many of whom have been in business for 20 years or more, face the challenge of managing older technology that doesn’t have all of the required capability while trying to introduce, on behalf of the business, all the new bells, whistles, and capabilities that are key business drivers in the 21st century.

So the question is, what happens when you have all this old technology that works just fine and was designed for applications to run inside a technical environment that was not initially Internet-enabled? In order to compete and relate with customers, suppliers and service providers that are using social media and mobile devices, you have to maintain your old systems and also integrate with all the new security requirements.
The point of talking about 21st-century information management is the fact that, number one, there is a whole new way of doing business now because of the Internet and e-commerce. And two, you have all these mature companies that have been in business since before the year 2000 and their systems were never really designed to be able to deliver the types of services that 21st-century systems are delivering without some drastic investment. Not only do established businesses need to maintain existing systems that are working, but they also need to consider their integration and go-forward strategy for embracing the new. That’s the only effective way to compete with companies that are embracing the 21st-century information management strategies we are seeing in today’s marketplace.

 

IT’S ABOUT ANTICIPATING NEW OPPORTUNITIES

Gregoire: It’s taking a look at what’s coming up next. For example, big data is hitting everybody. Big data is all about having to house terabytes of data. Twenty years ago, the word terabyte didn’t exist in the common language at all because even the idea of a gigabyte was a phenomenal amount of data. So here we are talking about larger amounts of data and doing analytics in order to determine things like what people are buying right now and which demographics of people are living the longest. All of this really leads into a different mentality than what we had before the turn of the century in terms of our information technology systems and strategy. Are we managing our applications and our systems or are we managing our data as business intelligence or are we able to manage both? In terms of information technology and the management structure, you have all of these new roles—from CTO, CIO, CSO, data managers, etc.—roles that were previously combined into one single role, but are now each full-time jobs with their own focus that are supported by a number of vendor products, suppliers, and service providers that all require management oversight. Or, how about rolling out a mobile strategy for your business? As a business, you want to think about how you relate to people and how you sell to them. You can’t ignore social media anymore; you can’t ignore smartphones anymore. You have to have a complete plan in order to continue doing business in the 21st century.

 

A LOT OF COMPANIES HAVE SMALL IT BUDGETS; WHAT TOOLS CAN THEY LEVERAGE FOR MIDDLEWARE SUPPORT AND MANAGEMENT TO COMBAT THAT?

Gregoire: IT budgets have been flat since somewhere around 2009, maybe as early as 2007. Much like patching a roof because you can’t afford a complete replacement, the problems don’t go away just because you don’t have the money or budget. A patch is a quick fix, and the problem will keep recurring until it’s actually solved.

When you look at what tools businesses can leverage, especially concerning middle management and support, take a look at what the business requirements are and then sell the need for additional funding to get the tools you require. The reality is that companies that are not putting enough money into their IT spending budgets are not going to survive the next five years. Competition is fierce.
I would have to go on the record and say that more companies are making the mistake of not paying attention to standardizing middleware management and support to help drive down those costs. As a result, they’re spending a lot more money than needed because of the diversity of their middleware-management tools. Companies are spending a lot of time troubleshooting and not getting a lot of results. These are companies that had a few copies of some very expensive tools, or perhaps trial copies, but no process in place to incorporate the proper tools into the development lifecycle. It all goes back to their IT planning and being able to justify the need to business management in terms of the outage time and costs of not having the essential tools, resources and processes. We see that across the middleware environment when we’re doing health checks. They say, “We couldn’t get money for that, so we tried to do this instead.”

 

HOW DOES DEVELOPING A TECHNOLOGY ROADMAP FIT INTO THIS AND WHAT’S THE BENEFIT OF THIS APPROACH?

Gregoire: We hear that terminology used all the time. At the technology level, is your infrastructure current? And when you ask what “current” means: if you installed software in 1999, have you applied all of the updates, new releases and patches, or even migrated to the latest version?
A technology roadmap places the infrastructure initiative at the same level as other application initiatives. So here are the underlying questions that should be asked when identifying a need for a technology roadmap and determining whether or not business applications are at risk.

  • Is the infrastructure current?
  • What middleware are we running? (Most of my customers now have middleware roadmaps.)
  • Are the applications running at vendor-supported levels?
  • Are the operating systems up-to-date?
  • Do the operating systems have certain security patches applied?
  • Are the applications connecting through to a database?
  • Is the database current? Is security-restricted access set up to that database?
  • Or, was everything installed using admin authority, where if hackers can crack that admin password, they gain access to everything in the network?

This is some of the content included in building out a technology roadmap. Developing a technology roadmap, if you don’t have one, is critical to the survival of your business. Otherwise, your technology is going to lag behind, and what may be current now will put you at risk tomorrow. Without a roadmap you’re likely going to be more vulnerable, and beyond that, there may be features and functions you won’t be able to offer your customers. You may even face instability issues.

WHAT ARE SOME TIPS FOR ORGANIZATIONS THAT WANT TO DEVELOP A TECHNOLOGY ROADMAP?

Gregoire: Start with what you have and what you do well, and develop a roadmap for the things that you know are most important to you. Carrying that forward, you can expand a technology roadmap to include some peripheral things. You’ll start off with a very specific technology roadmap and plan, and then start developing a plan for the network, mobile integration, and other components to the business. For every technology roadmap, you’re always going to be talking about how many resources it’s going to take to follow this roadmap, how much money it is going to take, and how much time.

 

WHEN YOU ARE WORKING WITH CLIENTS, HOW DO YOU PREPARE THEM FOR CONTINUED SUCCESS WHEN YOUR ENGAGEMENT IS COMPLETE?

Gregoire: We do what’s called a “project closeout.” Prior to that discussion with the customer, I have each consultant complete what’s called a “lessons learned.” Lessons learned is really just a document that helps me understand, from the consultant’s viewpoint, what went well and what didn’t go well. Then we complete the project closeout with the customer, with the idea of confirming whether we accomplished everything that was outlined in the statement of work and if there is something that should have been included that we didn’t do. What, if anything, could we have done better? The “project closeout” is something that I’ve found helps us identify other opportunities for the client where we might be able to assist or find resources.

White Paper: z/Linux Performance Configuration & Tuning for IBM® WebSphere® Compendium

TxMQ Staff Consultants Contributed To This Write-Up

Project Description

NOTE ON SOURCES/DOCUMENT PURPOSE

All sources for this guide come from well-documented IBM or IBM partner reference material. The reason for this document is simple: take all relevant sources and put their salient points into a single, comprehensive document for reliable setup and tuning of a z/Linux environment.
The ultimate point is to create a checklist and share best practices.

SKILLS NEEDED

Assemble a team that can address all aspects of the performance of the software stack. The following skills are usually required:

  • Overall Project Coordinator
  • VM Systems Programmer. This person sets up all the Linux guests in VM.
  • Linux Administrator. This person installs and configures Linux.
  • WebSphere Administrator.
  • Lead Application Programmer. This person can answer questions about what the application does and how it does it.
  • Network Administrator.

PROCESS

TIP: Start from the outside and work inward toward the application.
The environment surrounding the application causes about half of the potential performance problems. The other half is caused by the application itself.
Start with the environment that the application runs in. This eliminates potential causes of performance problems. You can then work toward the application in the following manner.
1. LPAR. Things to look at: number of IFLs, weight, caps, total real memory, memory allocation between cstore and xstore.
2. VM. Things to look at: communications configuration between Linux guests and other LPARs, paging space, share settings.
3. Linux. Things to look at: virtual memory size, virtual CPUs, VM share and limits, swapping, swap file size, kernel tuning.
4. WebSphere. Things to look at: JVM heap size, connection pool sizes, use of caches, WebSphere application performance characteristics.

LPAR CONFIGURATION

Defining LPAR resource allocation for CPU, memory, DASD, and network connections

LPAR WEIGHTS

Adjust depending on the environment (prod, test, etc…)
A VIRTUAL SWITCH (VSWITCH) FOR ALL LINUX GUEST SYSTEMS IS A GOOD PRACTICE.
With VSWITCH, the routing function is handled directly by VM’s Control Program instead of the TCP/IP machine. This can help eliminate most of the CPU time used by the VM router it replaces, resulting in a significant reduction in total system CPU time.
– When a TCP/IP VM router was replaced with VSWITCH, decreases ranging from 19% to 33% were observed.
– When a Linux router was replaced with VSWITCH, decreases ranging from 46% to 70% were observed.
NOTE: The security of VSWITCH is not equal to a dedicated firewall or an external router, so when high security is required of the routing function, consider using those instead of VSWITCH.

Z/VM VSWITCH LAN

This configuration resulted in higher throughput than the Guest LAN feature.

GUEST LAN

Guest LAN is ring based. It can be much simpler to configure and maintain.

HIPERSOCKETS FOR LPAR-LPAR COMMUNICATION

Tips for Avoiding Eligible Lists:

  • Set each Linux machine’s virtual-storage size only as large as it needs to be to let the desired Linux application(s) run. This suppresses the Linux guest’s tendency to use its entire address space for file cache. If the Linux file system is hit largely by reads, you can make up for this with minidisk cache (MDC). Otherwise, turn MDC off, because it induces about an 11-percent instruction-path-length penalty on writes, consumes storage for the cached data, and pays off little because the read fraction isn’t high enough.
  • Use whole volumes for VM paging instead of fractional volumes. In other words, never mix paging I/O and non-paging I/O on the same pack.
  • Implement a one-to-one relationship between paging CHPIDs and paging volumes.
  • Spread the paging volumes over as many DASD control units as possible.
  • If the paging control units support non-volatile storage (NVS) or DASD fast write (DASDFW), turn those features on (applies to RAID devices).
  • Provide at least twice as much DASD paging space (CP QUERY ALLOC PAGE) as the sum of the Linux guests’ virtual storage sizes.
  • Having at least one paging volume per Linux guest is a great thing. If the Linux guest is using synchronous page faults, exactly one volume per Linux guest will be enough. If the guest is using asynchronous page faults, more than one per guest might be appropriate; one per active Linux application will serve the purpose.
  • In queued direct I/O (QDIO)-intensive environments, plan that 1.25MB per idling real QDIO adapter will be consumed out of CP below-2GB free storage, for CP control blocks (shadow queues). If the adapter is being driven very hard, this number could rise to as much as 40MB per adapter. This tends to hit the below-2 GB storage pretty hard. CP prefers to resolve below-2GB contention by using expanded storage (xstore).
  • Consider configuring at least 2GB to 3GB of xstore to back up the below-2GB central storage, even if central storage is otherwise large.
  • Try CP SET RESERVED to favor storage use toward specific Linux guests.

VM CONFIGURATION

Memory Management and Allocation
Add 200-256MB for WebSphere overhead per guest.
Configure 70% of real memory as central storage (cstore).
Configure 30% of real memory as expanded storage (xstore). Without xstore VM must page directly to DASD, which is much slower than paging to xstore.
CP SET RESERVED. Consider reserving some memory pages for one particular Linux VM, at the expense of all others. This can be done with a z/VM command (CP SET RESERVED).
If unsure, a good guess at VM size is the z/VM scheduler’s assessment of the Linux guest’s working set size.
Use whole volumes for VM paging instead of fractional volumes. In other words, never mix paging I/O and non-paging I/O on the same pack.
Implement a one-to-one relationship between paging CHPIDs and paging volumes.
Spread the paging volumes over as many DASD control units as you can.
If the paging control units support NVS or DASDFW, turn them on (applies to RAID devices).
CP QUERY ALLOC PAGE. Provide at least twice as much DASD paging space as the sum of the Linux guests’ virtual storage sizes.
Having at least one paging volume per Linux guest is beneficial. If the Linux guest is using
synchronous page faults, exactly one volume per Linux guest will be enough. If the guest is using asynchronous page faults, more than one per guest may be appropriate; one volume per active Linux application is realistic.
In memory-overcommitment tests with z/VM, increasing the overcommitment ratio up to 3.2:1 caused no throughput degradation.
Cooperative Memory Management (CMM1) and Collaborative Memory Management (CMM2) both regulate Linux memory requirements under z/VM. Both methods improve performance when z/VM hits a system memory constraint.
Utilizing Named Saved Segments (NSS), the z/VM hypervisor makes operating system code in shared real memory pages available to z/VM guest virtual machines. With this update, multiple Red Hat Enterprise Linux guest operating systems on the z/VM can boot from the NSS and be run from a single copy of the Linux kernel in memory. (BZ#474646)
Configure some expanded storage for VM. Here are a few thoughts on why:
While configuring some xstore may result in more paging, it often results in more consistent or better response time. The paging algorithms in VM evolved around having a hierarchy of paging devices. Expanded storage is the high speed paging device and DASD the slower one where block paging is completed. This means expanded storage can act as a buffer for more active users as they switch slightly between working sets. These more active users do not compete with users coming from a completely paged out scenario.
The central versus expanded storage issue is related to the different implementations of LRU algorithms used between stealing from central storage and expanded storage. In short, for real storage, you use a reference bit, which gets reset fairly often. While in expanded storage, you have the luxury of having an exact timestamp of a block’s last use. This allows you to do a better job of selecting pages to page out to DASD.
In environments that page to DASD, the potential exists for transactions (as determined by CP) to be broken up by paging I/O. This can make a real-storage-only configuration look like its throughput rate is lower.
Also configure some expanded storage, if needed, for guest testing. OS/390, VM, and Linux can all use expanded storage.
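To check how real and expanded storage are currently configured (a quick sanity check against the 70/30 guidance above), commands along these lines can be issued from a suitably privileged user; the exact output format varies by z/VM release:
CP QUERY STORAGE
CP QUERY XSTOR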

VM SCHEDULER RESOURCE SETTINGS

Linux is a long-running virtual machine, while VM, by default, is set up for short-running guests. This means that the following changes should be made to the VM scheduler settings. Linux is a Q3 virtual machine, so changing the third value in these commands is most important. Include these settings in the PROFILE EXEC for the operator machine or AUTOLOG1 machine:
set srm storbuf=300,200,200
set srm ldubuf=100,100,100
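A minimal sketch of how these commands might be placed in the AUTOLOG1 (or operator) machine's PROFILE EXEC so they are issued automatically at system start; this assumes a standard CMS REXX exec and reuses the values shown above, and the exact SET SRM operand syntax should be verified against the CP Commands reference for your z/VM level:
/* PROFILE EXEC (sketch): set scheduler buffers at system start */
'CP SET SRM STORBUF 300 200 200'
'CP SET SRM LDUBUF 100 100 100'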

DO I NEED PAGING SPACE ON DASD?

YES. One of the most common mistakes new VM customers make is ignoring paging space. The VM system, as shipped, contains enough page space to get the system installed and to run some small trial work. However, you should add DASD page space to do real work. The z/VM planning and administration book has details on determining how much space is required.
Here are a few thoughts on page space:
If the system is not paging, you may not care where you put the page space. However, sooner or later the system grows to a point where it pages and then you’ll wish you had thought about it before this happens.
VM paging is most optimal when it has large, contiguous available space on volumes that are dedicated to paging. Therefore, do not mix page space with other space (user, t-disk, spool, etc.).
A rough starting point for page allocation is to add up the virtual machine sizes of the virtual servers running and multiply by 2 (a worked example follows below). Keep an eye on the allocation percentage and the block read set size.
See: Understanding poor performance due to paging increases
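A worked example of the rule of thumb above, using hypothetical numbers: ten Linux guests defined at 2GB of virtual storage each total 20GB, so allocate at least 2 x 20GB = 40GB of DASD page space, spread across dedicated paging volumes. Utilization can then be checked periodically with:
CP QUERY ALLOC PAGE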

USER CLASSES AND THEIR DESCRIPTIONS

If you have command privilege class E, issue the following CP command to view information about these classes of user: INDICATE LOAD

MINI-DISKS

A minimal Linux guest system fits onto a single 3390-3 DASD, and this is the recommended practice in the field. This practice requires that you do not use GNOME or KDE window managers in order to retain the small size of the installed system. (The example does not do this because we want to show the use of LVM and KDE).

VM SHARED KERNEL SUPPORT

If your Linux distribution supports the “VM shared kernel support” configuration option, the Linux kernel can be generated as a shareable NSS (named saved system). Once this is done, any VM user can IPL LXSHR, and about 1.5MB of the kernel is shared among all users. Obviously, the greater the number of Linux virtual machines running, the greater the benefit of using the shared system.

QUICKDSP

Makes a virtual machine exempt from being held back in an eligible list during scheduling when system memory and/or paging resources are constrained. Virtual machines with QUICKDSP set on go directly to the dispatch queue and are identified as Q0 users. We prefer that you control the formation of eligible lists by tuning the CP SRM values and allowing a reasonable over-commitment of memory and paging resources, rather than depending on QUICKDSP.
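If you do decide to use QUICKDSP for a particular guest, it can be set dynamically with a CP command or made permanent with an OPTION statement in the guest's directory entry. The guest name LINUX01 is a placeholder for this sketch:
CP SET QUICKDSP LINUX01 ON
In the user directory entry, the equivalent is:
OPTION QUICKDSP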

LINUX CONFIGURATION

LINUX GUESTS
VCPU
Each Linux guest is defined with an assigned number of virtual CPs and a SHARE setting that determines its share of the processor cycles available to z/VM.
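As an illustration of the SHARE setting, a guest's relative share can be changed dynamically with a CP command or fixed with a SHARE statement in its directory entry; the guest name LINUX01 and the value 200 are placeholders, not recommendations:
CP SET SHARE LINUX01 RELATIVE 200
Directory form:
SHARE RELATIVE 200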
MEMORY
When running WebSphere applications in Linux, you are typically able to over-commit memory at a 1.5:1 ratio. This means that for every 1000 MB of virtual memory needed by a Linux guest, VM needs only about 667 MB of real memory to back it up. This ratio is a starting point and needs to be adjusted based on experience with your workload.

LINUX SWAP – WHERE SHOULD LINUX SWAP?

TIPS:
Try to avoid swapping in Linux whenever possible. It adds path length and causes a significant hit to response time. However, sometimes swapping is unavoidable. If you must swap, these are some pointers:
Prefer swap devices over swap files.
Do not enable MDC on Linux swap Mini-Disks. The read ratio is not high enough to overcome the write overhead.
We recommend a swap device size approximately 15% of the VM size of the Linux guest. For example, a 1 GB Linux VM should allocate 150 MB for the swap device.
Consider multiple swap devices rather than a single, large VDISK swap device. Using multiple swap devices with different priorities can alleviate stress on the VM paging system when compared to a single, large VDISK.
Linux assigns priorities to swap extents. For example, you can set up a small VDISK with a higher priority (higher numeric value), and it will be selected for swap as long as it has space to contain the process being swapped. Swap extents of equal priority are used in round-robin fashion. Equal prioritization can be used to spread swap I/O across CHPIDs and controllers, but if you do this, be careful not to put all the swap extents on Mini-Disks on the same physical DASD volume; if you do, you will not accomplish any spreading. Use swapon -p to set swap extent priorities (see the example after this list).
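A minimal sketch of prioritized swap devices on the Linux side, assuming two DASD swap partitions at /dev/dasdb1 and /dev/dasdc1 (the device names are placeholders). The higher-priority extent is used first; extents given equal priority would be used round-robin:
mkswap /dev/dasdb1
mkswap /dev/dasdc1
swapon -p 10 /dev/dasdb1
swapon -p 5 /dev/dasdc1
Or, equivalently, in /etc/fstab:
/dev/dasdb1  swap  swap  pri=10  0 0
/dev/dasdc1  swap  swap  pri=5   0 0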

VDISK VS. DASD

The advantage of VDISK is that a very large swap area can be defined at very little expense. The VDISK is not allocated until the Linux server attempts to swap. Swapping to VDISK with the DIAGNOSE access method is faster than swapping to DASD or SCSI disk. In addition, when using a VDISK swap device, your z/VM performance management product can report swapping by a Linux guest.
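A hedged sketch of setting up a VDISK swap device: the VDISK is defined with a CP command (or an equivalent V-DISK MDISK statement in the directory), then prepared as swap from the Linux guest once the device is online. The device numbers, block count, and Linux device name are assumptions, and because a VDISK is volatile, the mkswap/swapon step must be repeated (typically from a boot script) each time the guest logs on:
CP DEFINE VFB-512 AS 0201 BLK 512000
Then, from the Linux guest:
mkswap /dev/dasdb1
swapon /dev/dasdb1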

DCSS

Swapping to DCSS is the fastest known method. As with VDISK, this solution requires memory, yet lack of memory is the reason for swapping in the first place, so it is best used as a small, fast swap device for peak situations. The DCSS swap device should be the first in a cascade of swap devices, where the following devices can be bigger and slower (real disk). Swapping to DCSS also adds complexity.
Create an EW/EN DCSS and configure the Linux guest to swap to the DCSS. This technique is useful for cases where the Linux guest is storage-constrained but the z/VM system is not. The technique lets the Linux guest dispose of the overhead associated with building channel programs to talk to the swap device. For one illustration of the use of swap-to-DCSS, read the paper here.
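A heavily hedged sketch of the Linux-side steps, assuming a DCSS named SWAPSEG has already been defined and saved on the z/VM side and that the dcssblk block device driver is available; the exact sysfs interface and device naming can vary by distribution and kernel level:
modprobe dcssblk
echo SWAPSEG > /sys/devices/dcssblk/add
mkswap /dev/dcssblk0
swapon -p 10 /dev/dcssblk0
The high swap priority makes the DCSS the first device in the cascade described above.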

DEDICATED VOLUME

If the storage load on your Linux guest is large, the guest might need a lot of room for swap. One way to accomplish this is simply to ATTACH or DEDICATE an entire volume to Linux for swapping. If you have the DASD to spare, this can be a simple and effective approach.
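For example, a full volume at real device number 0301 could be attached to the guest dynamically, or dedicated permanently in its directory entry; the device numbers and guest name are placeholders:
CP ATTACH 0301 TO LINUX01 AS 0301
Directory form:
DEDICATE 0301 0301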

TRADITIONAL MINIDISK

Using a traditional Mini-Disk on physical DASD requires some setup and formatting the first time, and again whenever the size of the swap space changes. However, the storage burden on z/VM to support Mini-Disk I/O is small, the controllers are well-cached, and I/O performance is generally very good. If you use a traditional Mini-Disk, you should disable z/VM Mini-Disk Cache (MDC) for that Mini-Disk (use the MINIOPT NOMDC statement in the user directory).
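A hypothetical directory sketch for a swap Mini-Disk with Mini-Disk cache disabled; the device number, extents, and volume label are assumptions and must match your DASD layout:
MDISK 0203 3390 0001 0300 LXSWP1 MR
MINIOPT NOMDC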

VM T-DISK

A VM temporary disk (t-disk) could be used. This lets you define disks of various sizes with less consideration for placement (having to find ‘x’ contiguous cylinders by hand if you don’t have DIRMAINT or a similar product). However, t-disk is temporary, so it needs to be configured (perhaps via the PROFILE EXEC) whenever the Linux VM logs on. The storage and performance benefits of traditional Mini-Disk I/O apply. If you use a t-disk, you should disable Mini-Disk cache for that Mini-Disk.
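For example, the Linux guest's PROFILE EXEC could define a temporary disk at logon with a CP command along these lines (the virtual device number and cylinder count are assumptions); the disk then has to be prepared as swap from Linux at each boot:
CP DEFINE T3390 AS 0205 CYL 300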

VM V-DISK

A VM virtual disk in storage (VDISK) is transient like a t-disk. However, a VDISK is backed by a memory address space instead of by real DASD. While in use, VDISK blocks reside in central storage (which makes it very fast). When not in use, VDISK blocks can be paged out to expanded storage or paging DASD. The use of VDISK for swapping is sufficiently complex that it is covered in a separate tips page.

XPRAM

Attach expanded storage to the Linux guest and allow it to swap to this media. This can give good performance if the Linux guest makes good use of the memory, but it can waste valuable memory if Linux uses it poorly or not at all. In general, this is not recommended for use in a z/VM environment.

WEBSPHERE CONFIGURATION

GC POLICY SETTINGS

The -Xgcpolicy options have these effects:

OPTTHRUPUT

Disables concurrent mark. If you do not have pause time problems (as seen by erratic application response times), you get the best throughput with this option. Optthruput is the default setting.

OPTAVGPAUSE

Enables concurrent mark with its default values. If you are having problems with erratic application response times that are caused by normal garbage collections, you can reduce those problems at the cost of some throughput, by using the optavgpause option.

GENCON

Requests the combined use of concurrent and generational GC to help minimize the time that is spent in any garbage collection pause.

SUBPOOL

Disables concurrent mark. It uses an improved object allocation algorithm to achieve better performance when allocating objects on the heap. This option might improve performance on SMP systems with 16 or more processors. The subpool option is available only on AIX®, Linux® PPC and zSeries®, z/OS®, and i5/OS®.
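As a hedged example, a GC policy is selected by adding the option to the application server's generic JVM arguments (or to a plain java command line when testing outside WebSphere); the heap sizes shown here are placeholders, not recommendations:
-Xgcpolicy:gencon -Xms256m -Xmx512m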

ANY FORM OF CACHING

Resulted in a significant throughput improvement over the no-caching case, with distributed map caching generating the highest throughput improvement.

DYNACACHE DISK OFFLOAD

An interesting feature intended to significantly improve performance with small caches, without additional CPU cost.

PERFORMANCE TUNING FOR WEBSPHERE

The following recommendations from the Washington Systems Center can improve the performance of your WebSphere applications:

    • Use the same value for StartServers, MaxClients, and MaxSpareServers parameters in the httpd.conf file.
      Identically defined values avoid starting additional servers as workload increases. The HTTP server error log displays a message if the value is too low. Use 40 as an initial value (a sample httpd.conf fragment appears after this list).
    • Serve image content (JPG and GIF files) from the IBM HTTP Server (IHS) or Apache
      Web server.
      Do not use the file serving servlet in WebSphere. Use the DocumentRoot or Alias
      directives to point to the image file directory.
    • Cache JSPs and Servlets using the servletcache.xml file.
      A sample definition is provided in the servletcache.sample.xml file. The URI defined in the
      servletcache.xml must match the URI found in the IHS access log. Look for GET statements, and create a definition for each JSP or servlet to cache.
    • Eliminate servlet reloading in production.
      Specify reloadingEnabled=”false” in the ibm-web-ext.xml file located in the application’s
      WEB-INF subdirectory.
    • Use Resource Analyzer to tune parameter settings.
      Additionally, examine the access, error, and native logs to verify applications are functioning correctly.
    • Reduce WebSphere queuing.
      To avoid flooding WebSphere queues, do not use an excessively large MaxClients value in the httpd.conf file. The Web Container Service General Properties MAXIMUM THREAD SIZE value should be two-thirds the value of MaxClients specified in the httpd.conf file. The Transport MAXIMUM KEEP ALIVE connections should be five more than the MaxClients value.
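A minimal httpd.conf sketch tying the first and last recommendations together, using the suggested initial value of 40; the directive names assume a prefork-style Apache/IHS configuration, so adjust for your web server level:
StartServers      40
MaxSpareServers   40
MaxClients        40
# Web container maximum thread size: about two-thirds of MaxClients (roughly 27)
# Transport maximum keep-alive connections: MaxClients + 5 (45)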

PERFORMANCE MONITORING

CP COMMANDS
COMMAND-LINE

      • vmstat
      • sysstat package with sadc, sar, iostat
      • dasd statistics
      • SCSI statistics
      • netstat
      • top
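Typical invocations of the tools listed above (the intervals and counts are arbitrary examples):
vmstat 5 12
sar -r 5 12
iostat -dk 5 12
top -b -n 1
netstat -s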

z/VM Performance Toolkit (see the IBM Redbook Performance Toolkit for VM, SG24-6059).
THE WAS_APPSERVER.PL SCRIPT
This perl script can help determine application memory usage. It displays memory used by WebSphere as well as memory usage for active WebSphere application servers. Using the Linux ps command, the script displays all processes containing the text “ActiveEJBServerProcess” (the WebSphere application server process). Using the RSS value for these processes, the script attempts to identify the amount of memory used by WebSphere applications.
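If the script itself is not at hand, a rough shell approximation of the same idea (summing the resident set sizes of the WebSphere application server processes) looks like this; it is only an estimate, because RSS double-counts shared pages:
ps -eo rss,args | grep ActiveEJBServerProcess | grep -v grep | awk '{sum += $1} END {print sum " KB resident"}'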