RTP Admin: Scenarios DLQ and Handler with IBM MQ


This article is intended to help IBM MQ administrators better understand Dead Letter Queues (DLQs) and DLQ handlers for RTP administration. It provides basic scenarios and explanations within IBM MQ.

Becoming a Real Time Payments (RTP) “participant” has numerous challenges.  For many financial entities, this is their first exposure to IBM MQ as a messaging system, which is a requirement to join the network.

The Clearing House’s RTP represents the next evolution of payments innovation for instant transfer of funds and confirmation. Financial institutions, big and small, are offering Real Time Payments to their customers to stay competitive.

 

IBM MQ and RTP

At its core, the RTP network was built upon the concept of MQ request/reply. For example, a participant sends a request message following the ISO 20022 XML standard.

MQ request/reply Example

The overall lifespan of the "transaction" is fifteen (15) seconds, but each "hop" between queue managers (participant to RTP to participant and back) is allotted two (2) seconds. The Clearing House's RTP documentation provides a chart with further detail, which is outside the scope of this article.

When everything works through “happy path”, it’ll be as shown above: the participant app sends a request message and awaits a response within the 15 second window.

But what happens if something goes wrong with the reply message being placed on RESPONSE.FROM.RTP1?

 

THE DEAD LETTER QUEUE AND THE DEAD LETTER HANDLER

A key feature of IBM MQ is assured delivery of messages to their destination. There are a number of scenarios in which MQ may be unable to deliver a message to its destination queue; in those cases the message is routed to the Dead Letter Queue (DLQ) defined for the queue manager.

Before placing a message on the DLQ, IBM MQ attaches a Dead Letter Header (DLH) to it. The DLH contains the information required to identify why the message was placed on the DLQ and what its intended destination was. This header plays a very important role when handling messages on the DLQ.
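Since the DLH is a fixed-layout binary structure, its leading fields can be pulled out with a few lines of code. Below is a minimal, hypothetical Python sketch; the field offsets are assumptions based on the published MQDLH layout, and a real application would normally use the MQ client libraries rather than hand-parsing:

```python
import struct

# Simplified sketch of the leading fields of the MQ Dead Letter Header (MQDLH).
# Assumed layout (per the published MQDLH structure): StrucId (4 bytes),
# Version (4-byte int), Reason (4-byte int), DestQName (48 chars),
# DestQMgrName (48 chars). Later fields (Format, PutApplName, ...) are omitted.
def parse_dlh_prefix(data: bytes, big_endian: bool = True) -> dict:
    fmt = (">" if big_endian else "<") + "4sii48s48s"
    struc_id, version, reason, dest_q, dest_qmgr = struct.unpack_from(fmt, data, 0)
    if struc_id != b"DLH ":
        raise ValueError("not a dead-letter header")
    return {
        "version": version,
        "reason": reason,                      # e.g. 2053 = MQRC_Q_FULL
        "dest_q": dest_q.decode("ascii").rstrip(),
        "dest_qmgr": dest_qmgr.decode("ascii").rstrip(),
    }

# Build a fake DLH prefix for illustration: reason 2053 (MQRC_Q_FULL),
# originally destined for RESPONSE.FROM.RTP1 on QMGR1.
fake = struct.pack(">4sii48s48s", b"DLH ", 1, 2053,
                   b"RESPONSE.FROM.RTP1".ljust(48), b"QMGR1".ljust(48))
print(parse_dlh_prefix(fake)["reason"])   # → 2053
```

The Reason and DestQName fields recovered here are exactly what the DLQ handler's rule table matches on in the scenarios below.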



IBM provides a Dead Letter Handler utility called "runmqdlq". Its purpose is to process messages placed on the DLQ, taking the necessary action on each one. A rule table containing the required control data and rules can be created and supplied as input to the Dead Letter Handler.

 

TYPES OF DEAD LETTER MESSAGES

In a nutshell, messages placed on the DLQ fall into two categories:
– Replayable messages
– Non-replayable messages

Replayable messages are placed on the DLQ due to temporary issues, such as the destination queue being full or put-disabled. These messages can be replayed without further investigation, and the replay can be performed automatically by the dlqhandler.

Non-replayable messages are placed on the DLQ due to issues that are not temporary in nature (an incorrect destination, data conversion errors, unknown reason codes). These messages require further investigation to identify why they were placed on the DLQ; replaying them with the dlqhandler will most likely land them right back on the DLQ. They therefore require an MQ administrator to investigate further.
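The split above can be expressed as a simple lookup on the DLH reason code. Here is a minimal Python sketch; the numeric values are the standard MQ reason codes (MQRC_PUT_INHIBITED = 2051, MQRC_Q_FULL = 2053), but which reasons count as replayable is a policy choice for your environment, not an MQ rule:

```python
# Hypothetical helper: classify a dead-letter reason code as replayable or not.
# 2051 = MQRC_PUT_INHIBITED, 2053 = MQRC_Q_FULL (values from IBM MQ's cmqc.h).
REPLAYABLE_REASONS = {2051, 2053}   # temporary conditions: safe to retry later

def is_replayable(reason: int) -> bool:
    """True if the message can simply be replayed to its original queue."""
    return reason in REPLAYABLE_REASONS

print(is_replayable(2053))   # queue full → True
print(is_replayable(2085))   # MQRC_UNKNOWN_OBJECT_NAME → False, needs an admin
```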

 

THE DLQ HANDLER SETUP

Replayable messages can be run again (reprocessed) by using the dlqhandler. In short, the dlqhandler is set up as follows:
1) Creating the dlqhandler rule file
2) Starting the dlqhandler

The rule file is a simple text-based flat file with configuration instructions as to how a message should be handled.

 

COMMON SCENARIOS

A number of scenarios are possible, based on the design of MQ infrastructure at an organization, but the most common scenarios are:

 

  • SINGLE-PURPOSE: Replayable messages are “replayed” to their original queue and non-replayable messages are placed on a single designated queue for an MQ admin to investigate

 

  • MULTI-PURPOSE: Replayable messages are “replayed” to their original queue and non-replayable messages are placed on a designated queue for each application for an MQ admin to investigate

 

  • HYBRID-PURPOSE: Replayable and non-replayable messages from specific queues are placed on a designated queue for each application for an MQ admin to investigate, while other replayable messages are “replayed” to their original queue, and non-replayable messages are placed on a single designated queue for an MQ admin to investigate.

 

It is worth noting that messages put to the DLQ because their destination queue is full (MQRC_Q_FULL) or put-inhibited (MQRC_PUT_INHIBITED) will be in constant retry mode. For that reason, it's important to have monitoring and alerting set up on the queue manager and its associated MQ objects, so that someone is alerted (email, page, etc.) to investigate why a queue is approaching 100% depth or is put-inhibited.

 

SCENARIO 1: SINGLE PURPOSE

This is a very common requirement and can be achieved with the following rule file.

Contents of <scenario1.rul> file:

******=====Retry every 30 seconds, and WAIT for messages =======*****
RETRYINT(30) WAIT(YES)

***** For reason queue full and put disabled *******
**** retry continuously to put the message on the original queue ****
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(999999999)
REASON(MQRC_PUT_INHIBITED) ACTION(RETRY) RETRY(999999999)

**** For all other dlq messages, move those to designated queue *****
ACTION(FWD) FWDQ(UNPLAYABLE.DLQ) HEADER(YES)
******=========================================================*******

 

SCENARIO 2: MULTI PURPOSE

Suppose there are 3 applications using the qmgr: APP1, APP2 and APP3. Each application has its own queue, plus a dedicated queue to hold its DLQ messages:
APP1 – APP1.QL, DLQ – APP1.ERR.DLQ
APP2 – APP2.QL, DLQ – APP2.ERR.DLQ
APP3 – APP3.QL, DLQ – APP3.ERR.DLQ

We now have to write rules based on the DESTQ for each application. Below is the example rule file for this scenario.
Contents of <scenario2.rul> file:

******=====Retry every 30 seconds, and WAIT for messages =======*****
RETRYINT(30) WAIT(YES)
***** For reason queue full and put disabled *******
**** retry continuously to put the message on the original queue ****
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(999999999)
REASON(MQRC_PUT_INHIBITED) ACTION(RETRY) RETRY(999999999)
******* For APP1, forward messages to APP1.ERR.DLQ ******
DESTQ(APP1.QL) ACTION(FWD) FWDQ(APP1.ERR.DLQ) HEADER(YES)

******* For APP2, forward messages to APP2.ERR.DLQ ******
DESTQ(APP2.QL) ACTION(FWD) FWDQ(APP2.ERR.DLQ) HEADER(YES)

******* For APP3, forward messages to APP3.ERR.DLQ ******
DESTQ(APP3.QL) ACTION(FWD) FWDQ(APP3.ERR.DLQ) HEADER(YES)
**** For all other dlq messages, move those to designated queue *****
ACTION(FWD) FWDQ(GENERAL.ERR.DLQ) HEADER(YES)
*********=========================================================******
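The dlqhandler applies the first rule whose patterns all match the message. As an illustration of that precedence, here is a simplified, hypothetical Python model of the scenario 2 table (real runmqdlq rules support more keywords and wildcard matching than this sketch; see the product documentation):

```python
# Simplified, hypothetical model of runmqdlq rule matching for scenario 2:
# each rule may constrain REASON and/or DESTQ; the first rule whose
# constraints all match decides the action.
RULES = [
    {"reason": 2053, "action": ("RETRY", None)},              # MQRC_Q_FULL
    {"reason": 2051, "action": ("RETRY", None)},              # MQRC_PUT_INHIBITED
    {"destq": "APP1.QL", "action": ("FWD", "APP1.ERR.DLQ")},
    {"destq": "APP2.QL", "action": ("FWD", "APP2.ERR.DLQ")},
    {"destq": "APP3.QL", "action": ("FWD", "APP3.ERR.DLQ")},
    {"action": ("FWD", "GENERAL.ERR.DLQ")},                   # catch-all
]

def decide(reason: int, destq: str):
    """Return the (action, forward-queue) chosen by the first matching rule."""
    for rule in RULES:
        if "reason" in rule and rule["reason"] != reason:
            continue
        if "destq" in rule and rule["destq"] != destq:
            continue
        return rule["action"]

print(decide(2053, "APP1.QL"))   # → ('RETRY', None)   queue full: retry first
print(decide(2087, "APP2.QL"))   # → ('FWD', 'APP2.ERR.DLQ')
print(decide(2087, "OTHER.QL"))  # → ('FWD', 'GENERAL.ERR.DLQ')
```

Note that rule order matters: because the REASON rules appear first, a queue-full message destined for APP1.QL is retried rather than forwarded.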

 

SCENARIO 3: HYBRID PURPOSE 

This is an example where, regardless of why a message destined for APP1.QL was put on the DLQ, the message is forwarded to APP1.ERR.DLQ.

For other (non-APP1) DLQ messages, the handler attempts to replay queue-full and put-inhibited messages to their intended destination queue.

If the retry count exceeds 10, or the message is non-replayable, the message is forwarded to UNPLAYABLE.DLQ.

Contents of <scenario3.rul> file:

******=====Retry every 30 seconds, and WAIT for messages =======*****
RETRYINT(30) WAIT(YES)

******* For APP1, forward messages to APP1.ERR.DLQ ******
DESTQ(APP1.QL) ACTION(FWD) FWDQ(APP1.ERR.DLQ) HEADER(YES)

***** For reason queue full and put disabled *******
**** retry 10 times to put the message on the original queue ****
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(10)
REASON(MQRC_PUT_INHIBITED) ACTION(RETRY) RETRY(10)
**** For all other dlq messages, move those to designated queue *****
ACTION(FWD) FWDQ(UNPLAYABLE.DLQ) HEADER(YES)
******=========================================================*******

 

STARTING THE DLQHANDLER

Once the rule file is created, you can configure dlqhandler startup in the following ways:

1) Manually start the dlqhandler and keep it running:
runmqdlq QMGR_DLQ QMGR_NAME < qrulefile.rul

2) Configure dlqhandler as a service in the queue manager
SPECIAL NOTE FOR WINDOWS SERVERS:
DEFINE SERVICE(dlqhandler) +
SERVTYPE(SERVER) +
CONTROL(MANUAL) +
STARTCMD('c:\var\bin\mqdlq.bat') +
DESCR('dlqhandler service') +
STARTARG('DLQ QMNAME C:\var\rulefiles\qrulefile.rul') +
STOPCMD('c:\var\bin\stopdlh.bat') +
STOPARG('DLQ QMNAME') +
STDOUT('/path/dlq.log') +
STDERR('/path/dlq_error.log')

NOTE: On Windows, because the service framework does not handle input redirection ("<") the way Linux does, a wrapper script must be written.

Contents of mqdlq.bat (start command):
echo alt ql(%1) get(enabled) | runmqsc %2
runmqdlq.exe %1 %2 %3

Contents of stopdlh.bat (stop command):
echo alt ql(%1) get(disabled) | runmqsc %2

Disabling GET on the DLQ causes the dead letter handler to end; the start script re-enables GET before launching the handler again.

3) Configure triggering on the DLQ to start the dlqhandler whenever the first message arrives on the queue.

You can do this by following the standard steps for configuring triggering on any queue; refer to the IBM MQ documentation for more information on triggering.

Once triggering is set up, the dlqhandler will be started automatically based on the trigger condition you set, so it does not need to run all the time. For that reason, you no longer need WAIT(YES) in the rule file; you can change it to WAIT(NO).

 

WANT TO LEARN MORE?

TxMQ delivers an RTP Starter Pack to accelerate participant onboarding.  You can learn more about our starter pack here: http://txmq.com/RTP/

Our deep industry experience and subject matter experts are available to solve your most complex challenges.  We deliver solutions and innovations to do business in an ever-changing world, to guide & support your organization through the digital transformation journey.

TxMQ
Imagine. Transform. Engage.
We’re here to help you work smart in the new economy.

This post was authored by John Carr, Principal Integration Architect at TxMQ. You can follow John on LinkedIn here.

Real-Time Payment Network Administration: IBM MQ Expiry Reporting


Becoming a Real-Time Payments (RTP) “participant” has numerous challenges.  For many financial entities, this is their first exposure to IBM MQ as a messaging system, which is a requirement to join the network.

The Clearing House’s RTP represents the next evolution of payments innovation for instant transfer of funds and confirmation. Financial institutions, big and small, are offering Real-Time Payments to their customers to stay competitive.

This article provides basic scenarios and explanations regarding message expiry, and the IBM MQ-generated reports identifying expired messages.

IBM MQ MESSAGE EXPIRY WITH RTP

At its core, the RTP network is built upon the concept of MQ request/reply. For example, a participant sends a request message following the ISO 20022 XML standard. These request/reply messages have a short lifespan; if a message isn't consumed before its set expiry time, an MQ expiry report is generated (per the message's report options) and returned to the participant's response queue.

For example, when generating a request, it is the participant app's responsibility to set the message expiry duration and to indicate, via the message's report options, whether an expiry report should be generated. The participant app's request message also sets the reply-to queue and queue manager, to complete the request/reply round trip.
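One detail worth remembering when setting expiry: the MQMD Expiry field is expressed in tenths of a second, not seconds. A small sketch using the timing figures from this article (the helper name is hypothetical):

```python
# MQMD.Expiry is expressed in tenths of a second (an IBM MQ convention).
# Hypothetical helper converting the RTP timing budget into MQMD terms.
def seconds_to_expiry(seconds: float) -> int:
    """Convert a lifespan in seconds to MQMD Expiry units (tenths of a second)."""
    return int(seconds * 10)

TRANSACTION_BUDGET_S = 15   # overall request/reply lifespan per the RTP spec
HOP_BUDGET_S = 2            # per-hop allowance between queue managers

print(seconds_to_expiry(TRANSACTION_BUDGET_S))  # → 150
print(seconds_to_expiry(HOP_BUDGET_S))          # → 20
```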

SCENARIO 1: HAPPY PATH

The overall lifespan of the “transaction” is fifteen (15) seconds, but the “hops” between the participant to RTP to participant and back are two (2) seconds each between its respective queue manager.  The Clearing House’s RTP documentation provides a chart to explain further and is outside the narrative of this article.  

When everything works through “happy path”, it’ll be as shown above: the participant app sends a request message and awaits a response within the 15-second window.

SCENARIO 2:  TCH-RTP APP DOES NOT READ MESSAGE

When the message is delivered to RTP and isn't read off the queue in time, it's considered "expired."  An internal IBM MQ process detects this the next time an app opens the queue for input, generates an expiry report message, and sends it back to your queue manager.  The message payload is the XML request message that the original participant app sent.

When you get the message, the way you can tell which queue manager generated the expiry report is the Put Application Name within the MQMD.  In the case above, it's from RTP.
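Putting those two MQMD fields together, an app can recognize an expiry report and attribute it. Below is a minimal, hypothetical sketch; MQFB_EXPIRATION is the standard feedback code (258, from cmqc.h), while the PutApplName value shown is illustrative only:

```python
MQFB_EXPIRATION = 258   # MQMD Feedback code for an expiry report (from cmqc.h)

# Hypothetical helper: describe an incoming report message from its MQMD fields.
def describe_report(feedback: int, put_appl_name: str) -> str:
    if feedback != MQFB_EXPIRATION:
        return "not an expiry report"
    # PutApplName is a fixed-width, space-padded field, hence the strip()
    return f"request expired; report generated by {put_appl_name.strip()}"

# The PutApplName value here is illustrative, not an actual RTP identifier.
print(describe_report(258, "RTP_QMGR  "))
# → request expired; report generated by RTP_QMGR
```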

SCENARIO 3: MESSAGE STUCK IN THE PARTICIPANT XMIT QUEUE

It is possible for your request message to get stuck on your XMIT queue.  This is due to the sender channel being unable to communicate with its companion receiver channel.  When the message is considered “expired,” an internal IBM MQ process detects this and generates the expiry report as shown above.  

When you get the message, the way you can tell which queue manager generated the expiry report is the Put Application Name within the MQMD.  In the case above, it's from your qmgr.

CONCLUSION

Setting message expiry is another hallmark of IBM MQ to ensure messages are consumed within a prescribed amount of time.  Expiry reports are a type of report generated by the queue manager if the message is discarded before delivery to an application.  

The concepts within this article are not limited to MQ messages between participants on the RTP network; they can be used between any IBM MQ-enabled applications. 


This post was authored by John Carr, Principal Integration Architect at TxMQ. You can follow John on LinkedIn here.

How Bank IT Leaders Can Get out of Reactive Mode (and Start Preparing for Tomorrow)

I spend a lot of time talking to IT professionals in banks across the US and Canada. Some large and global, others regional. As the CEO of a technology consultancy that works with financial institutions of all sizes, having varied conversations with our clients is a big part of my job. And I can tell you that almost every single one of them says the same thing: I’m so busy reacting to day-to-day issues that I just don’t have time to really plan for the future.

In other words, they’re always in a reactive mode as they deal with issues that range from minor (slow transaction processing) to major (catastrophic security breaches). But while playing whack-a-mole is critical to any bank, even a small shift in priorities can give CIOs and their teams the room to get ready for tomorrow rather than just focusing on today.

How to get out of reactive mode

Every bank technology person intuitively knows all this, of course, but it’s almost impossible for most to carve out the time to do any real planning. What they need are some ways to break the cycle. To that end, here are just a few suggestions for IT leaders, based on my experiences with bank IT organizations, to get out of reactive mode and start preparing for tomorrow.

Have a clear vision

A clear vision is important in all organizations. Knowing what we’re all marching towards not only helps keep teams focused and unified, but also ensures high morale and a sense of teamwork. The day-to-day menial tasks mean a lot more when understood in the context of the overall goal.

Break projects into smaller projects

As a runner, I’ve participated in my share of marathons, and what I can say is that I’ve never started one only to tell myself, “Okay, just 26.2 miles to go!” Rather, like most runners, I break the race down into digestible (and mentally palatable!) chunks. It starts with a 1 mile run. Then I work up to a 10k (about 6 miles), and so on, until I reach the final 5k.

Analogously, I’ve seen successful teams in large organizations do amazing things just by breaking huge, company-shifting tasks into smaller projects — smaller chunks that everyone can understand, get behind, and see the end of. Maybe it’s a three-week assessment to kick off a six-month work effort. Or maybe it’s a small development proof of concept before launching a huge software redeployment initiative slated to last months. Whatever the project, making things smaller allows people to enjoy the little successes, and helps keep teams focused.

Get buy-in from company leadership

IT leaders are constantly going to management asking for more money to fund their projects and operations. And a lot of times, management doesn’t want to give it to them. It’s a frustrating situation for both parties, to be sure, but consider that one of the reasons management might be so reluctant to divert even more money to IT is you have nothing to show them for all the cash they’ve put into it previously. In their minds, they keep giving you more money, but nothing really changes. You’re still putting out fires and playing whack-a-mole.

If, on the other hand, you’re able to show them a project that will ultimately improve operations (or improve the customer experience, or whatever your goal is) they’ll be a lot more likely to agree. As an IT leader, it’s your job to seek out these projects and bring them to business leaders’ attention.

Implement DevOps methodology

I find a lot of financial institutions are still stuck in the old ways of managing their application lifecycles. They tend to follow an approach — the so-called “waterfall” model — that’s horribly outdated. The waterfall model for building software essentially involves breaking down projects into sequential, linear phases. Each phase depends on the deliverables of the previous phase to begin work. While it sounds straightforward enough, the flaw with the waterfall model is that it doesn’t reflect the way software is actually used by employees and customers in the real world. The reality is, requirements and expectations change even as the application is being built, and a rigid methodology like the waterfall model lacks the responsiveness that’s required in today’s business environment.

To overcome these flaws, we recommend a DevOps methodology. DevOps combines software development with IT operations to shorten application development lifecycles and provide continuous delivery. In essence, DevOps practitioners work to increase communication between software development teams and IT teams, automating those communication processes wherever possible. This collaborative approach allows IT teams to get exactly what they need from development teams, faster, to do their job better. “Fail fast, fail often” is a common mantra. Encourage the failure, learn from it, and then iterate to improve.

DevOps is obviously a radical shift from the way many bank IT professionals are used to making and using enterprise software, and to really implement it right, you need someone well-versed in the practice. But implemented correctly, it has the capacity to kickstart an IT organization that’s stuck in a rut.

Getting ahead

As an IT consultant, I’ve heard all the answers in the book for why your organization can’t seem to get ahead of the day-to-day. But these excuses are just that: excuses. If you’re an IT leader, by definition you have the power to change your organization. You just need to exercise it effectively.

Remember: our world of technology has three pillars: people, process, and technology. No stool stands on two legs, nor does IT. Understand these three complementary components, and you’re well on your way to transforming your organization.

Why Banks Need to Start Thinking Like Tech Companies

Historically, for most Americans (and Canadians), the local bank branch has always been where you go not just to deposit and withdraw cash, but to manage your retirement or savings account, apply for a credit card and secure a home, car or small business loan. Today, however, the bank’s ascendancy is being challenged by the rise of alternative institutions and other scrappy players who are trying to tap into areas that were formerly the exclusive domain of banks. This category of emerging fintech companies includes online-only banks, credit unions, retirement planning apps, online lending marketplaces, peer-to-peer payment platforms and others too numerous to mention. And while banks may have the size advantage, nothing in business lasts forever. Do these Davids have a chance to slay Goliath? And what do the banks need to do to protect themselves from upstart challengers?

Studies indicate these new entities are giving banks a run for their money (no pun intended). The top five U.S. banks, for instance, accounted for only 21% of mortgage originations in 2019, compared to half of mortgages in 2011. Filling the gap are non-bank lenders, which not only offer a convenient, digital-first customer experience, but also tend to approve more applicants. Similar trends can be witnessed in small business loans and personal loans.

It’s not a stretch to say the traditional bank is facing an existential crisis. This has been partly brought on by a general lack of competition for so long. For example, at one point, towns had just one bank. This single bank didn’t have to innovate in the face of zero competition. That reality may have led to a decades-long attitude of complacency, which as a result, has led to a failure to innovate. Retail banks need to rethink pretty much everything. In short, they need to start thinking like a startup—more specifically, a tech startup. Silicon Valley is driven in large part by a philosophy of disruption, innovation and entrepreneurship. Many alternative lenders have been empowered by this philosophy, but that’s not to say that traditional banks can’t make use of it, too. Far from it, here are some ways that banks can start thinking more like tech companies so they can stay competitive against alternative providers.

Embrace lean methodology. 

Startups, by definition, lack the resources of more established businesses, but they don't let those limitations stifle innovation. In fact, those limitations actually serve to encourage innovation. Lean methodology is a way of designing and bringing new products to market that is tailored to the limited financial resources of startup organizations. First outlined by entrepreneur Eric Ries in "The Lean Startup," this approach emphasizes building and testing iteratively to reduce waste and achieve a better market fit.

To become vehicles of innovation, banks should consider adopting similar methodologies. I’m not suggesting that they should create artificial obstructions or arbitrary constraints. But no matter the size of the institution, budgets are always going to feel too small—not least of all because product developers for massive institutions need to develop huge products to match. With tried-and-true methodologies for innovation like the Lean Startup out there, scarcity shouldn’t be an excuse for not innovating. 

Fail fast, iterate often. Adopt Agile.

Startups know that rapid iteration cycles mean rapid innovation. It also means embracing a culture of failure. Failing to fail means failing to succeed. These are the lynchpins of agile or lean methodologies. Excellence is the enemy of success and progress. Get it done, get it out there in front of the market and then iterate improvements.

Identify opportunities with big data. 

One of the reasons alternative lenders are able to offer such high rates of approval is that they employ state-of-the-art AI and machine learning techniques to get a better picture of their customer than a simple credit or background check can deliver. Well-trained AI algorithms can efficiently comb through a wide body of available data to uncover trends and make predictions about the risk of lending to a given individual with incredible accuracy. 

Online-first lenders have such an advantage here because they're in a better place to mine that data. What a lot of people forget about data analytics is that the greatest algorithms are only as good as the data you feed them. Businesses, and banks especially, generate millions of data points per day, data that could prove valuable for data mining and other similar uses. However, the majority of this data is unstructured and heterogeneous, and oftentimes siloed and difficult to access. Many successful online-first lenders have carefully structured their digital loan applications to be useful for data analytics purposes from the ground up. When nearly 40% of the work of data analytics is gathering and cleaning data, this represents a huge advantage to the fintech startup.

But traditional banks can take advantage of this, too. Developing online and mobile banking applications to replace old-fashioned paper forms for most activities would set banks up to make better use of that data by ingesting it in a cleaner format. Add in the fact that customers are demanding mobile banking features anyway, and there’s no excuse for not offering customers a more robust set of mobile banking features.

Shrink bloated bureaucracy with cross-functional organizations.

Think about all the startups you’ve visited. Did teams operate in silos, constantly blaming other teams for their inability to make progress? Or did they adapt to situations, never believing their roles to be fixed or immutable?

To become the latter kind of organization, traditional banks need to break the cycle of bureaucratic apathy. One way to do that is to have disparate teams work together on projects. Working on shared projects not only helps develop a sense of shared purpose, but it also empowers employees to solve problems in areas that are not considered in their traditional wheelhouse. That, in turn, reduces the inefficiency of teams passing the baton to another division until it’s been weeks or months until the customer’s concern has even been truly considered. Moreover, bringing together different kinds of minds and thinkers encourages the kind of fertile ground in which innovation is known to thrive.

Reports of the bank’s death have been greatly exaggerated.

Ultimately, banks have numerous advantages that they can leverage over most fintech startups. They have their brick-and-mortar retail locations, allowing them to make personal connections with customers that drive loyalty. They’re considered more trustworthy to the average consumer (for the most part). And a lot of people just want to do all their banking at a single bank branch rather than shop around for various piecemeal banking solutions. If banks can innovate their information technology and organizational structures to meet the changing needs of today’s customers, they can continue to dominate the financial market.

 

Open Banking in the US?

Open Banking in the US

Can government intervention in banking actually encourage innovation rather than restrict it? That’s the question the U.K. government set out to answer with the implementation of its Open Banking directive.

This policy, which requires the country’s nine biggest banks to make customers’ financial data accessible by authorized third-party service providers, came into effect in January of 2018. Now, more than two years later, we’ve seen the results of this experiment first-hand, and the feedback has been quite positive overall. Furthermore, that success across the pond has given many U.S. banking leaders the confidence to start thinking about what similar regulation would look like here.

As a technologist who spends his days helping businesses modernize their IT infrastructures, I think this embrace (albeit cautious) of open banking is an extremely positive development. I’ve seen first-hand the benefits that open platforms can have in an industry, for customers and providers alike. That’s why I think it’s so important that business leaders at traditional banks understand just what open banking is. Because once they do, they’ll agree that open banking, whether it’s through government regulation or through their own action, is just what traditional banks need to stay competitive in our increasingly digital age.

What is open banking?

Let's start by defining open banking. In short, it is the practice of opening up consumer financial data through APIs. A bank that embraces the open banking model will create APIs that define how a program can reliably and securely access its customers' data. In addition, there are opportunities outside the scope of this article to monetize those APIs, creating entirely new revenue streams.

By creating these specifications, open banking simplifies the process of building third-party apps that need access to consumer data. In fact, you’ve probably been the beneficiary of open banking if you’ve ever used apps such as Mint, Wealthfront, Venmo or TurboTax. Even if you’re wary of just handing over your bank account credentials to a third party, you probably have no problem with checking a box that allows only select data, like transactions, to be shared to authorized and properly vetted apps. And that, in a nutshell, is the power of open banking: it engenders the kind of trust that startup or niche third-party digital service providers need in order to gain traction.

What is the U.K. model?

What I've just given is the technical definition of open banking. But open banking is also a political, or more specifically a regulatory, concept. In the U.K., the Second Payment Services Directive ("PSD2" for short) requires the country's largest banks, including HSBC, Barclays and Lloyds, to make certain customer data accessible through APIs. Some of the goals of the directive are to increase competition, counteract monopoly-like effects, and make banks work harder to get (and please and retain) customers.

Those are laudable and important goals for maintaining a robust liberal economic system, but perhaps the most salient aspect of the directive for British citizens will be how it encourages innovation in banking. The idea is that by lowering the barrier to entry for fintech startups, PSD2 will lead to the creation of new startups and products that benefit consumers.

Why should banks embrace open banking?

If open banking is so good for startups and other non-banks, what incentive do major banks in the U.S. have to implement open banking? Aren’t they just enabling their competition and digging their own graves?

The answer is “not really,” but before we get into that I should note that customers are already starting to expect open banking-like features from their banks. They want it to be easy and secure to set up Apple Pay or integrate their transaction data into Mint or TurboTax. Banks must keep up with consumer expectations if they want to retain customers.

Additionally, most of these apps don’t really represent competition for banks; in fact, these services tend to be complementary or adjacent to traditional banking services. If a bank’s core business is stowing customers’ money and giving out loans, then it has little to fear from a personal finance or tax app using its customers’ data. All sharing data can do in that case is make their customers more responsible with their money.

The security benefits of open banking shouldn’t be downplayed, either. Defining exactly how apps can gain access to just the data they need will reduce the practice of customers handing over their account credentials to get the digital services they want. That represents a huge reduction in risk for banks with comparatively little investment on their part.

Bringing banking into the 21st century

Finally, let’s not forget that U.S. banks are already dabbling in open banking voluntarily, at least selectively. That suggests banking leaders must already agree to some extent that opening up data drives innovation. And innovation is something the banking industry could definitely use a dose of. While fintech startups have made splashes by specializing in making just one aspect of the consumer financial experience better, banks have attempted to expand their services into every little corner of the banking-adjacent market. The result is bloated organizations and unprofitable units that siphon resources from making banking better.

Banks should be using their resources to develop better APIs and deeper data analytics to help them make better loan decisions or catch fraud — not trying to get into the mobile app business. It’s unlikely that big banks will ever become “lean” organizations, but embracing open banking can at last allow banks to offload non-core work and get back to the fundamental services that make them profitable.


Can We Make Zelle Cool?

Let’s face it: people like using cool technology. They want the latest iPhone, the hottest games and the newest social platform. One can argue the relative merits of this obsession, or the need for the brightest and shiniest object, but the reality remains: everyone wants the latest, most up-to-date products and apps. Anyone who remembers moving from Myspace to Facebook knows this reality. And one of today’s “cool” apps is Venmo, the person-to-person payments app that boasts over 40 million users, 50 percent of whom are millennials. That’s a big problem for Zelle, but it doesn’t have to be.

Digital payments platforms, the person-to-person kind, aren’t especially new. PayPal and Square have been around for more than 10 years; Stripe is newer but still well established; and long-forgotten startups like Brodia and Qpass existed even before then. But somehow Venmo, now a PayPal subsidiary, managed to become so hip that the company name actually became a verb. Think of it as the fun, cool little brother of the PayPal behemoth. Unfortunately for the banks, Venmo was so successful that traditional financial institutions started wondering why they were being shut out of the incredibly lucrative mobile peer-to-peer payments market. That’s how we ended up with Zelle, launched in 2017 with the express purpose of beating Venmo at its own game.

At face value, Zelle checks off all the boxes. It was built for a specific task and does pretty much everything it’s supposed to do. But where it has succeeded on the technical side, it’s still coming up short where it really matters: user adoption. Chalk it up to any number of factors, from security concerns to Venmo’s massive head start, but the reality is an inescapable one: Venmo is cool and Zelle isn’t. Venmo is what millennials use while their parents use Zelle, or so the perception goes. That may not matter to C-level executives at banks, whose aspirations run far beyond what is cool, but it’s a huge determinant of success or failure in the marketplace.

The good news for Zelle is that this problem can be fixed. The bad news is that it’s going to take some heavy lifting to change consumer opinion and drive adoption by people who see Zelle in the same way that seventh graders see the assistant principal at the school dance. Even if he has the best moves in the room, he’ll always be a square.

It might be tempting to see this as a marketing problem, but if Zelle is going to gain market traction and silence the naysayers, it’s going to need technical changes that make it better, and more useful, than Venmo.

First, some low-hanging fruit. Venmo explicitly prohibits money transfers for the purchase of goods or services unless the recipient is a Venmo-verified merchant. It is meant for person-to-person payments. For Zelle, this is a massive opportunity to gain adoption where Venmo fears to tread. And because Zelle is backed by banks, it has the data and the AI capabilities to pull it off with much less risk than an independent player, even one as large as Venmo.

It could also be argued that Zelle simply can’t win on the cool factor at all; cool is so intangible as to be almost ethereal, and can a company really set out to make a cool app? Probably not: coolness flows from a product as a result of its utility. But Zelle can win in other ways.

Be friendly. For starters, apps need to be open and inviting. Big banks have a sometimes well-earned reputation for being fiercely competitive and protective of their turf. Millennials, or more broadly consumers under age 35, want openness and interoperability: systems and tools that are perceived as playing nicely with other ecosystems have an advantage. Big banks making the news for making it harder for their customers to set up Venmo is not the way to win favor with target customers. Adopting open banking is.

Be open. Give consumers a choice. The banks that combined forces to create Zelle are all but forcing their customers to use it. With more banks in the queue to adopt Zelle as their primary P2P payments tool, more people will become Zelle users by default, not by choice. As a marketing strategy, this won’t work: yes, adoption will grow, but customer satisfaction will drop. If the end game is broader use of the bank and all of its products and services, Zelle is but one tool among many, not the be-all, end-all. Banks would do well to focus on the war to gain customers, not the battle to win users.

Simplify. When in doubt, software engineers and designers should focus on this one point. Although under-35s pick up new technology easily, that doesn’t mean they want to spend hours working through an app interface. When designing the app, remember that less is more: simplify, eliminate options and remember KISS (keep it simple, stupid).

Integrate. Better yet, integrate with social media. Yes, Zelle has tried to copy this element of Venmo’s success, but somehow it feels awkward. Social media integration needs to be a priority, and it needs to be done well, something that Zelle still seems to be missing.

In the person-to-person payments discussion, there are many factors to consider. In the end, though, this is just a skirmish in the broader contest for consumer-banking relationships. By forcing users to adopt, and adapt to, Zelle, banks are taking agency away from consumers, and that will all but guarantee Zelle never becomes the “cool” payments app. Banks need to consider the larger picture, which goes far beyond which app wins the battle for what’s cool. This is not likely to be a winner-take-all situation, but rather a gradual war of attrition, with many winners and more losers.

Banks would do well to remember this.

Chuck Fried is president and chief executive of TxMQ. Prior to TxMQ, Chuck founded multiple businesses in the IT and other technology services spaces. Before that, he served in IT leadership roles in both the public and private sectors. In 2020, he was named an IBM Champion, affirming his commitment to IBM products, offerings and solutions.