Generating OpenAPI or Swagger From Code is an Anti-Pattern, and Here’s Why

(This article was originally posted on Medium.)

I’ve been using Swagger/OpenAPI for a few years now, and RAML before that. I’m a big fan of these “API documentation” tools because they provide a number of additional benefits beyond simply being able to generate nice-looking documentation for customers and keep client-side and server-side development teams on the same page. However, many projects fail to fully realize the potential of OpenAPI because they approach it the way they approach Javadoc or JSDoc: they add it to their code, instead of using it as an API design tool.

Here are five reasons why generating OpenAPI specifications from code is a bad idea.

You wind up with a poorer API design when you fail to design your API.

You do actually design your API, right? It seems pretty obvious, but in order to produce a high-quality API, you need to put in some up-front design work before you start writing code. If you don’t know what data objects your application will need or how you do and don’t want to allow API consumers to manipulate those objects, you can’t produce a quality API design.

OpenAPI gives you a lightweight, easy-to-understand way to describe what those objects are at a high level and what the relationships are between those objects, without getting bogged down in the details of how they’ll be represented in a database. Separating your API object definitions from the back-end code that implements them also helps you break another anti-pattern: deriving your API object model from your database object model. Similarly, it helps you to “think in REST” by separating the semantics of invoking the API from the operations themselves. For example, a user (noun) can’t log in (verb), because the “log in” verb doesn’t exist in REST — you’d create (POST) a session resource instead. In this case, limiting the vocabulary you have to work with results in a better design.
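
To make that concrete, here’s a minimal, hypothetical sketch of what the resulting resource-oriented endpoint might look like in JAX-RS. The Credentials and Session types are invented purely for illustration; in a design-first workflow the POST /sessions path would be described in the YAML before any of this code exists.

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class SessionsExample {

    // Minimal placeholder types, invented purely for illustration.
    public static class Credentials { public String username; public String password; }
    public static class Session { public String username; public String token; }

    // There is no /login verb; the client creates a session resource instead,
    // and the only "verb" involved is the HTTP method.
    @Path("/sessions")
    public static class SessionsResource {

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        @Produces(MediaType.APPLICATION_JSON)
        public Response createSession(Credentials credentials) {
            Session session = new Session();           // authenticate, then issue a session
            session.username = credentials.username;   // (real credential checks omitted)
            session.token = java.util.UUID.randomUUID().toString();
            return Response.status(Response.Status.CREATED).entity(session).build();
        }
    }
}
```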

It takes longer to get development teams moving when you start with code.

It’s simply quicker to rough out an API by writing OpenAPI YAML than it is to start creating and annotating Java classes or writing and decorating Express stubs. All it takes to get basic sample data out of an OpenAPI-generated API is to fill out the example property for each field. Code generators are available for just about every mainstream client- and server-side development platform you can think of, and you can easily integrate those generators into your build workflow or CI pipeline. You can have skeleton codebases for both your client and your server, plus sample data, with little more than a properly configured CI pipeline and a YAML file.

You’ll wind up reworking the API more often when you start with code.

This is really a side-effect of #1, above. If your API grows organically from your implementation, you’re going to eventually hit a point where you want to reorganize things to make the API easier to use. Is it possible to have enough discipline to avoid this pitfall? Maybe, but I haven’t seen it in the wild.

It’s harder to rework your API design when you find a problem with it.

If you want to move things around in a code-first API, you have to go into your code, find all of the affected paths or objects, and rework them individually. Then test. If you’re good or lucky, or your API is small enough, maybe that’s not a huge amount of work or risk. If you’re at this point at all, though, it’s likely that you’ve got some spaghetti on your hands that you need to straighten out. If you started with OpenAPI, you simply update your paths and objects in the YAML file and re-generate the API. As long as your tags and operationIds have remained consistent, and you’ve used some mechanism to separate hand-written code from generated code, all you’re left to change is the business logic and the mapping of the API’s object model onto its representation in the database.
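
As a rough sketch of what that separation can look like on the server side, assuming a generator that turns each tag into an interface and each operationId into a method (the names below imitate typical generator output and don’t come from any specific tool):

```java
import java.util.List;

public class RegenerationExample {

    // Imitation of a generated artifact: one interface method per operationId.
    // A real generator would overwrite this file whenever the YAML changes,
    // so nothing hand-written belongs here.
    public interface UsersApi {                 // from tag: users
        List<String> listUsers();               // from operationId: listUsers
        String getUser(String userId);          // from operationId: getUser
    }

    // Hand-written implementation, kept in its own source tree. As long as the
    // tags and operationIds stay stable, regenerating the API only touches the
    // interface above; the business logic and database mapping here survive.
    public static class UsersApiImpl implements UsersApi {

        @Override
        public List<String> listUsers() {
            return List.of("alice", "bob");     // stand-in for a real database query
        }

        @Override
        public String getUser(String userId) {
            return userId;                      // stand-in for real lookup and mapping
        }
    }
}
```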

The bigger your team, the more single-threaded your API development workflow becomes.

In larger teams building in mixed development environments, it’s likely you have people who specialize in client-side versus server-side development. So, what happens when you need to add to or change your API? Well, typically your server-side developer makes the changes to the API before handing it off to the client-side developer to build against. Or you exchange a few emails, each developer goes off to do their own thing, and you hope that, when everyone’s done, the client implementation matches up with the server implementation. In a setting where the team reviews proposed API changes before moving forward with implementation, code you’ve already written might be thrown away if the team decides to go in a different direction than the developer proposed.

It’s easy to avoid this if you start with the OpenAPI definition. It’s faster to sketch out the changes and easier for the rest of the team to review. They can read the YAML, or they can read HTML-formatted documentation generated from the YAML. If changes need to be made, they can be made quickly without throwing away any code. Finally, any developer can make changes to the design. You don’t have to know the server-side implementation language to contribute to the API. Once approved, your CI pipeline or build process will generate stubs and mock data so that everyone can get started on their piece of the implementation right away.

The quality of your generated documentation is worse.

Developers are lazy documenters. We just are. If it doesn’t make the code run, we don’t want to do it. That leads us to omit or skimp on documentation, skip the example values, and, generally speaking, weasel out of work that seems unimportant but really isn’t. Writing OpenAPI YAML is just less work than decorating code with annotations that don’t contribute to its function.

Upgrading For Federal Reserve Bank MQ Integration

If your financial institution uses WebSphere MQ V6.x or older to communicate with the Federal Reserve Bank, your time is running out… and fast. WebSphere MQ V6.x went End of Support on September 30, 2012, and the Federal Reserve Bank is requiring upgrades to a current, supported version.

WebSphere MQ V7.0.0 and V7.0.1 went End of Support at the end of September 2015. For those of you thinking of moving only as far as WebSphere MQ V7.1, you might want to reconsider. While still technically supported by IBM, V7.1 is likely looking at another upgrade within the next 18 to 24 months.
From a business perspective, I recommend financial institutions upgrade to IBM MQ V8.x. At the very least, you could consider MQ V7.5.x.

IBM MQ V8.x, rebranded from WebSphere MQ, is now at fix pack 3 and has proven to be very stable.

Further, MQ V7.5.0.5 and MQ V8.0.0.3 have deprecated SSLv3 connections due to the POODLE exploit and reduced the number of supported cipher suites. (It is critical that you understand which connection protocols and cipher suites the Federal Reserve Bank supports.)
But MQ upgrades are only half the story.

If you haven’t been keeping pace with MQ updates, you’ve likely not been keeping pace with your operating system either. OS updates to supported versions are arguably just as critical as MQ updates.

Interested in learning more about your options? Let’s start a conversation. Reach out to TxMQ today.

The Business Value of SOA (or How to convince your line-of-business executive that integration is worth the cost)

In this digital economy, your company cannot afford to be hindered by inflexible IT. We throw around phrases like “vendor lock-in” and “vendor consolidation,” but the truth is, successful businesses thrive on harnessing the power of the next big thing. Sure, you might have one option today, but tomorrow you could have one hundred or one thousand. That’s why integration strategies are vital to both business and IT growth. One way to ensure your many programs, systems and applications integrate effectively is through SOA.

Service Oriented Architecture (SOA) Defined:
To an IT professional, SOA integrates all your systems, so that different programs and applications, all from different vendors and running on different machines, can communicate smoothly and effectively. This also means that your legacy infrastructures can coexist with your new cloud services.
To a business executive, SOA creates a more competitive business edge by improving the efficiency of collaboration between business processes and IT. SOA drives growth by boosting productivity, enhancing performance and eliminating frustrations with IT.

LOB executives command the budget; therefore, they wield ultimate decision-making power when it comes to purchasing the software and hardware to meet your IT needs. Engineers, developers and IT talent work in the nooks and crannies, so they must prove that the nuts and bolts translate into profits. Looking to convince your LOB executives to open the coffers a bit wider? Here are a few ways to add some business-value lingo to your tech-heavy talk.
Bottom Line Value of SOA:

  • Integrates with current infrastructure: SOA means you can keep your mainframe and leverage existing legacy applications. IT developers can build additional functionality without having to spend thousands rebuilding the entire infrastructure.
  • Decreases development costs: SOA breaks down an application into small, independently functional pieces, which can be reused in multiple applications, thus bringing down the cost of development.
  • Better scalability: Location is no longer an issue; a service can run on-premise, in the cloud or both, and on different servers. This increases your company’s ability to scale up to serve more customers, or scale down if consumer habits change.
  • Reduces maintenance and support costs: In the past, SOA could get costly, but now offerings like IBM’s Enterprise Service Bus can bring down those operating costs significantly. New capabilities can be delivered quickly and more efficiently.

So after you’ve spent your lunch hour elegantly wooing your LOB executive, consider partnering with TxMQ to execute your SOA strategy. We can help at any point in the process, from assessment to deployment, and even maintenance and support. Request your free, no-obligation discovery session and see exactly what it will take to boost productivity and profitability with a more secure and agile integration of your various applications.

Upgrade Windows Server 2003 (WS2003) – Do It Today

Another day, another end-of-support announcement for a product: On July 14, 2015, Windows Server 2003 (WS2003) goes out of support.
Poof! Over. That’s the bad news.
What’s the upside? Well, there isn’t really an upside, but you should rest assured that it won’t stop working. Systems won’t crash, and software won’t stop running.
From the standpoint of security, however, the implications are rather more dramatic.
For starters, this automatically means that anyone running WS2003 will be noncompliant with PCI security standards. So if your business falls under these restrictions – and if your business accepts any credit cards, it certainly does – the clock is ticking. Loudly.
There’ll be no more security patches, no more technical support and no more software or content updates after July 14, 2015. Most importantly, this information is public. Hackers typically target systems they know to be out of support. The only solution, really, is to upgrade Windows Server 2003 today.
TxMQ consultants report that a large percentage of our customers’ systems are running on Windows Server, and some percentage of our customers are still on WS2003. There are no terms strong enough to reinforce the need to get in touch with TxMQ, or your support vendor, for an immediate plan to upgrade Windows Server 2003 and affected systems.
Server migrations oftentimes take up to 90 days, while application migrations can take up to 60. Frankly, any business still running WS2003 doesn’t have 60 days to upgrade, let alone 90. So please make a plan today for your migration/upgrade.

IRS Get Transcript Breach – The Agency Didn't Adequately Prepare

The announcement came yesterday: Chinese hackers had breached the federal government’s personnel office. In isolation, this might seem a single event. But when viewed in the grouping of several other top-level hacks, it becomes clear that the federal government is extremely vulnerable.
One clear parallel was the recent IRS Get Transcript breach, announced in late May, which is believed to trace to Russia. The information was taken from an IRS website called Get Transcript, where taxpayers can obtain previous tax returns and other tax filings. In order to access the information, the thieves cleared a security screen that required detailed knowledge about each taxpayer, including their Social Security number, date of birth, tax-filing status and street address. The IRS believes the criminals originally obtained this information from other sources; they were accessing the IRS website to get even more information about the taxpayers, which would help them claim fraudulent tax refunds in the future. Might the information in the more recent hack also provide the fuel for a future hack? Quite likely, in my opinion.
What’s especially bothersome to me is that the IRS had received several warnings from the GAO in 2014 and 2015. If those recommendations had been implemented, there would have been less of an opportunity for the attack. The IRS failed to implement dozens of security upgrades to its computer systems, some of which could have made it more difficult for hackers to use an IRS website to steal tax information from 104,000 taxpayers.
In addition, the IRS has a comprehensive framework for its cybersecurity program, which includes risk assessment for its systems, security-plan development, and the training of employees in security awareness and other specialized topics. However, the IRS did not correctly implement aspects of that program. The IRS faces a higher statistical probability of attack, yet it was unprepared. Let’s face it: The US federal government is a prime target for hackers.
The concern here, of course, is the grouping of attacks and the reality that the US government must be better prepared. I’ve managed IT systems and architecture for more than 3 decades and I’ll say this: The IRS testing methodology wasn’t capable of determining whether the required controls were operating effectively. This speaks not only to a lack of preparedness, but to a generally passive attitude toward these types of events and the testing protocols. The federal government doesn’t adequately protect the PII it collects on all US citizens, and simply sending a letter to those impacted by a breach is not enough to prevent recurrence in the future.
I don’t need to tell you that. The GAO told the IRS the same thing: “Until IRS takes additional steps to (1) address unresolved and newly identified control deficiencies and (2) effectively implements elements of its information security program, including, among other things, updating policies, test and evaluation procedures, and remedial action procedures, its financial and taxpayer data will remain unnecessarily vulnerable to inappropriate and undetected use, modification, or disclosure.”
These shortcomings were the basis for GAO’s determination that IRS had a significant deficiency in internal control over financial-reporting systems prior to the IRS Get Transcript Breach.
Author Note: In my next blog on security, I’ll talk about the NIST standard for small businesses, with recommendations to prepare and protect in the wake of these high-level breaches.
(Photo by Ray Tsang)

What The Premera Breach Teaches Us About Enterprise Security

By TxMQ Middleware Architect Gary Dischner
No surprise to hear of yet another breach occurring – this time at Premera Blue Cross. The company became aware of a security breach on Jan. 29, 2015, but didn’t begin to notify anyone involved (including the state insurance board) until March 17, nearly seven weeks later. The actual attack took place in May 2014 and may affect 11 million customer records dating back to 2002.
As with many companies that experience a security breach, excessive delays in first identifying and confirming that a breach had occurred, coupled with the typical delays in assessing it and providing notification, led the state insurance board to fault Premera for untimely notification. A review of the HIPAA regulations for breach reporting indicates that notification of those impacted must occur within 60 days. Many companies, including Premera, just aren’t equipped with the tools and security-management processes to handle these incidents. For healthcare companies, HIPAA guidelines state that notification to the state insurance commissioner should be immediate for breaches involving more than 500 individuals. Consequently, Premera is now being sued by the state insurance commissioner.
A company found guilty of late notification should concern the public: There’s at least the appearance of a general lack of concern over the impact and severity of the breach for its customers, partners and constituents. Premera has responded to its own behavior with efforts to protect itself and to cover up details of the incident, rather than being forthright with information so that those impacted can take the needed steps to protect themselves from further exposure and potential consequences, such as fraud and identity theft.
A secondary concern is the lack of security-management measures around protected data at many companies. In this case, the audit findings – which had been provided to Premera on Nov. 28, 2014 – identified serious infractions in each of the following domains:

  • Security management
  • Access controls
  • Configuration management
  • Segregation of duties
  • Contingency planning
  • Application controls specific to Premera’s claims-processing systems
  • HIPAA compliance

More and more companies are being reminded of the data exposures and related risks, but remain slow to respond with corrective measures. Companies of high integrity will take immediate responsive measures and will openly express concern for the repercussions of the exposure. Companies that do not? They should be dealt with severely. Let this Premera example serve as a wake-up call, just as the Anthem breach did, for companies that are holding sensitive data. As a customer or business partner, let them know you expect them to take every measure to protect your healthcare and financial information.
And in closing, let’s all take away a few lessons learned. Security assessments must become a regular operational function. Self-audits demonstrate a company’s high integrity and commitment to identifying process improvements for security management. Such efforts should be assessed quarterly with reports to the company board to make sure every vulnerability is remediated and customers who are working with the company are protected. After all, it’s only the company that can secure its own technical environments.
Photo by torbakhopper

Managed File Transfer: Your Solution Isn't Blowing In The Wind

If FTP were a part of nature’s landscape, the process would look a lot like a dandelion gone to seed. The seeds need to go somewhere, and all it takes is a bit of wind to scatter them toward some approximate destination.
Same thing happens on computer networks every day. We take a file, we stroke a key to nudge it via FTP toward some final destination, then turn and walk away. And that’s the issue with using FTP and SFTP to send files within the enterprise: the lack of any native logging and validation. Your files are, in effect, blowing in the wind.
The popular solution is to create custom scripts to wrap and transmit the data. That’s why there are typically a dozen or so different homegrown FTP wrappers in any large enterprise – each crafted by a different employee, contractor or consultant with a different skillset and philosophy. And even if the file transfers run smoothly within that single enterprise, the system will ultimately fail to deliver for a B2B integration. There’s also the headache of encrypting, logging and auditing financial data and personal health information using these homegrown file-transfer scripts. Yuck.
TxMQ absolutely recommends a managed system for file transfer, because a managed system:

  • Takes security and password issues out of the hands of junior associates and elevates data security
  • Enables the highest level of data encryption for transmission, including FIPS
  • Facilitates knowledge transfer and smooth handoffs within the enterprise (homegrown scripts are notoriously wonky)
  • Offers native logging, scheduling, success/failure reporting, error checking, auditing and validation
  • Integrates with other business systems to help scale and grow your business

TxMQ typically recommends and deploys IBM’s Managed File Transfer (MFT) in two different iterations: One as part of the Sterling product stack, the other as an extension on top of MQ.
When you install MFT on top of MQ, you suddenly and seamlessly have a file-transfer infrastructure with built-in check-summing, failure reporting, audit control and everything else mentioned above. All with lock-tight security and anybody-can-learn ease of use.
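
As a rough illustration of what “built-in check-summing” saves you from writing yourself, here’s a minimal Java sketch (the file name is hypothetical) of the SHA-256 digest a sender and receiver would otherwise have to compute and compare by hand in a homegrown script; MFT performs this kind of integrity verification natively:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChecksumExample {

    // Compute a SHA-256 digest of a file. Comparing the sender's and the
    // receiver's digests is the sort of validation a managed transfer does
    // for you automatically.
    static String sha256(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical file path, for illustration only.
        System.out.println(sha256(Path.of("payments-batch.csv")));
    }
}
```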
MFT as part of the Sterling product stack delivers all those capabilities to an integrated B2B environment, with the flexibility to quickly test and scale new projects and integrations, and in turn attract more B2B partners.
TxMQ is currently deploying this solution and developing a standardization manual for IBM MFT. Are you worried about your file transfer process? Do you need help trading files with a new business partner? The answer IS NOT blowing in the wind. Contact us today for a free and confidential scoping.
Photo by Alberto Bondoni.

IBM WAS Enhancements Deliver Internet-Scale Clustering For Applications

IBM recently announced enhancements to its WebSphere Application Server (IBM WAS) version 8.5.5 that deliver more functionality and services to the Liberty and full profiles. The enhancements are geared toward both development and production environments and are said to provide “significant enhancements in terms of developer experience and high-end resiliency.”
Most of the features can be optionally installed from the WebSphere Liberty Repository and used in conjunction with features that are already available and active.
Developers now have new programming models and tools. The result: a better developer experience that should lead to a more rapid pace of application deployment. Administrators and businesses can leverage new Intelligent Management and security features to lower the administrative overhead of managing, scaling, and securing servers.
WAS in general is gearing itself more and more toward cloud and mobile development and deployment, hence the rollout of these new features.
Specific enhancements to WebSphere Liberty include:

  • Java EE 7-compliant programming model support for WebSockets 1.0 (JSR 356) and Servlet 3.1 (JSR 340) to enrich applications with responsive dynamic content (see the sketch after this list)
  • Additional Java EE 7 components in support of APIs for processing JSON (JavaScript Object Notation) data (JSR 353) and Concurrency utilities (JSR 236)
  • Auto-scaling capabilities to dynamically adjust the number of Java virtual machines (JVMs) based on workload, and auto-routing to intelligently manage that workload
  • Improved operational efficiency of large-scale, clustered deployments of tens of thousands of Java virtual machines (JVMs) in a Liberty Collective
  • Configurable, global Web Service handlers for extending and customizing payloads to Web Service applications
  • REST connector for non-Java clients to extend client access to the Java Management Extensions (JMX) administration infrastructure through a RESTful API
  • Simplified configuration processing for feature developers to enable customization of WebSphere Application Server Liberty profile capabilities
  • Enhancement to the distributed security model using OpenID and OpenID Connect to simplify the task of authenticating users across multiple trust domains
  • Enhancement to WebSphere Liberty Administrative Center for usability and management of large collectives of application servers
  • Enhancement to the WebSphere Application Server Migration Toolkit – Liberty Tech Preview, which includes a new binary scanning capability to quickly evaluate applications for rapid deployment on WebSphere Liberty
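
To give a flavor of those new programming models, here’s a minimal sketch of a JSR 356 WebSocket endpoint that uses the JSR 353 JSON-P API to parse and answer a message. It assumes the websocket-1.0 and jsonp-1.0 Liberty features are enabled in server.xml; the endpoint path and field names are invented for illustration.

```java
import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// JSR 356 (WebSocket 1.0) endpoint that uses JSR 353 (JSON-P) to parse the
// incoming payload and build a JSON reply over the same connection.
@ServerEndpoint("/quotes")
public class QuoteEndpoint {

    @OnMessage
    public String onMessage(String message) {
        try (JsonReader reader = Json.createReader(new StringReader(message))) {
            JsonObject request = reader.readObject();
            String symbol = request.getString("symbol", "UNKNOWN");
            // Returning a String from @OnMessage sends it back to the client.
            return Json.createObjectBuilder()
                       .add("symbol", symbol)
                       .add("status", "received")
                       .build()
                       .toString();
        }
    }
}
```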

TxMQ is an IBM Premier Business Partner and we specialize in WebSphere. For additional information about IBM WAS and all WebSphere-related matters, contact president Chuck Fried: 716-636-0070 x222, chuck@TxMQ.com.
TxMQ recently introduced its MQ Capacity Planner – a new solution developed for performance-metrics analysis of enterprise-wide WebSphere MQ (now IBM MQ) infrastructure. TxMQ’s innovative technology enables MQ administrators to measure usage and capacity of an entire MQ infrastructure with one comprehensive tool. Visit our MQ Capacity Planner product page.

POODLE Vulnerability In SSLv3 Affects IBM WebSphere MQ

Secure Socket Layer version 3 (SSLv3) is largely obsolete, but some software does occasionally fall back to this version of SSL protocol. The bad news is that SSLv3 contains a vulnerability that exposes systems to a potential attack. The vulnerability is nicknamed POODLE, which stands for Padding Oracle On Downgraded Legacy Encryption.

The vulnerability does affect IBM WebSphere MQ because SSLv3 is enabled by default in MQ.
IBM describes the vulnerability like this: “IBM WebSphere MQ could allow a remote attacker to obtain sensitive information, caused by a design error when using the SSLv3 protocol. A remote user with the ability to conduct a man-in-the-middle attack could exploit this vulnerability via a POODLE (Padding Oracle On Downgraded Legacy Encryption) attack to decrypt SSL sessions and access the plaintext of encrypted connections.”

The vulnerability affects all versions and releases of IBM WebSphere MQ, IBM WebSphere MQ Internet Pass-Thru and IBM Mobile Messaging and M2M Client Pack.

To harden against the vulnerability, users should disable SSLv3 on all WebSphere MQ servers and clients and instead use the TLS protocol. More specifically, WebSphere MQ channels select either the SSL or the TLS protocol based on the channel cipherspec. The following cipherspecs are associated with the SSLv3 protocol, and channels that use them should be changed to use a TLS cipherspec:
AES_SHA_US
RC4_SHA_US
RC4_MD5_US
TRIPLE_DES_SHA_US
DES_SHA_EXPORT1024
RC4_56_SHA_EXPORT1024
RC4_MD5_EXPORT
RC2_MD5_EXPORT
DES_SHA_EXPORT
NULL_SHA
NULL_MD5
FIPS_WITH_DES_CBC_SHA
FIPS_WITH_3DES_EDE_CBC_SHA
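
The same move away from SSLv3 has to happen on the client side. Here’s a minimal sketch using the WebSphere MQ classes for Java; the host, channel and queue manager names are hypothetical, and the cipher suite you select must map to the TLS cipherspec configured on the matching server-connection channel (IBM publishes mapping tables for each MQ and Java version).

```java
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class TlsClientExample {

    public static void main(String[] args) throws MQException {
        // Hypothetical connection details, for illustration only.
        MQEnvironment.hostname = "mq.example.com";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "APP.SVRCONN";

        // Select a TLS cipher suite instead of an SSLv3 one. This name must
        // correspond to the TLS cipherspec set on the channel; check IBM's
        // cipherspec-to-cipher-suite mapping for your MQ and Java versions.
        MQEnvironment.sslCipherSuite = "TLS_RSA_WITH_AES_128_CBC_SHA256";

        MQQueueManager qmgr = new MQQueueManager("QM1");
        System.out.println("Connected to QM1: " + qmgr.isConnected());
        qmgr.disconnect();
    }
}
```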

On UNIX, Linux, Windows and z/OS platforms, FIPS 140-2 compliance mode enforces the use of the TLS protocol. A summary of MQ cipherspecs, protocols and FIPS compliance status can be found in IBM’s WebSphere MQ documentation.

On the IBM i platform, use of the SSLv3 protocol can be disabled at a system level by altering the QSSLPCL system value. Use Change System Value (CHGSYSVAL) to modify the QSSLPCL value, changing the default value of *OPSYS to a list that excludes *SSLV3. For example: *TLSV1.2, *TLSV1.1, *TLSV1.

TxMQ is an IBM Premier Business Partner and “MQ” is part of our name. For additional information about this vulnerability and all WebSphere-related matters, contact president Chuck Fried: 716-636-0070 x222, chuck@TxMQ.com.

TxMQ recently introduced its MQ Capacity Planner – a new solution developed for performance-metrics analysis of enterprise-wide WebSphere MQ (now IBM MQ) infrastructure. TxMQ’s innovative technology enables MQ administrators to measure usage and capacity of an entire MQ infrastructure with one comprehensive tool.
(Photo from J Jongsma)