DTG’s Craig Drabik and Canopyright Founder Jeffrey Hamilton with Hedera’s Gehrig Kunz on DLT and Blockchain in the Real-world

Cannabis entrepreneur Jeffrey Hamilton and TxMQ DTG Practice Manager Craig Drabik join host Gehrig Kunz on Blockchain in the Real-world to discuss cannabis IP rights and the role of the Hedera network in the development of Canopyright.

Canopyright employs distributed ledger technology on the Hedera network to provide a “decentralized herbarium where marijuana breeders can register their unique cultivars and transact business with growers effortlessly, securely, confidentially, and paperlessly.”

TxMQ’s Disruptive Technologies Group specializes in applying cutting-edge technologies such as blockchain, IoT, and machine learning to solve critical business problems and open up new business models.

Ready to learn how DTG can help you use emerging technologies to disrupt your industry?  Contact us to schedule a free discovery session today!

 

Meet TxMQ Senior IBM Db2 for z/OS Consultant Sheryl M. Larsen


Sheryl M. Larsen has led an impressive career as a Db2 consultant and educator in her thirty-six years working in the technology consulting industry. In 2013, she was named World Wide Db2 Evangelist by IBM.

A former IBM Gold Consultant, IDUG hall of fame speaker and noted industry author, Sheryl has helped numerous clients optimize Db2 database performance, delivered countless presentations at industry events, written many articles featured in well-known technology publications, and co-authored the book “DB2 Answers! Certified Tech Support” published by McGraw-Hill/Osborne Media.

As the Senior Db2 for z/OS Consultant at TxMQ, Sheryl plays a major role in expanding our Db2 on z/OS offering portfolio. Through her commitment to this role, she has strengthened TxMQ’s ability to help clients and partners optimize the role of Db2 on z/OS in technology infrastructures across numerous industries.

Before joining the TxMQ team, Sheryl was a Worldwide Db2 Z Solutions Sales Consultant at BMC Software from 2015 to 2021.

TxMQ’s IBM MQ Custom Docker Image and Helm Chart

Use TxMQ’s proprietary custom MQ Docker image and Helm chart for containerized MQ deployments in any cloud architecture.

TxMQ has developed a custom IBM MQ™ Docker image and Helm chart to address needs for containerized MQ deployments. Options include Kubernetes and Docker Compose, available for deployment in any Public Cloud or On-Premises environment. The solution supports the adoption of best practices when working with MQ. 

TxMQ’s custom image is designed for organizations that run Kubernetes clusters and use Helm for workload deployments, or that want to run the same MQ image in a Docker environment with Docker Compose.

  • Open Source, available under the MIT license.
  • Supports Standalone and Multi-Instance High Availability deployments on Kubernetes clusters and with Docker Compose.
  • Integrated with LDAP authentication and includes an LDAP server image.
  • Imports cryptographic material for the Queue Manager and the web console.
  • Applies MQ customizations and startup commands from the input volume and from a Git repository.

TxMQ’s custom container image simplifies development and administration of IBM MQ™. It supports MQ object configuration and includes starter projects to accelerate MQ adoption. MQ Objects can be easily tracked via source control and deployed instantly using Git.
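
For a sense of what a Kubernetes deployment looks like, here is a minimal sketch of a Helm-based install. The repository URL, chart name, and value key below are hypothetical placeholders; the actual coordinates are supplied with access to TxMQ’s private repository.

# Register the chart repository (URL is a placeholder)
helm repo add txmq https://charts.example.com/txmq
helm repo update

# Install a standalone queue manager release (chart and value names are hypothetical)
helm install qm1 txmq/mq --set queueManager.name=QM1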

TxMQ makes the custom image available to IBM MQ™ customers in a private repository. Support options are available from TxMQ. Please contact us today to get started! mqcustomcontainer@txmq.com

RTP Admin: DLQ and DLQ Handler Scenarios with IBM MQ


This article is intended to help IBM MQ administrators gain a better understanding of Dead Letter Queues (DLQs) and DLQ handlers for RTP administration. It provides basic scenarios and explanations within IBM MQ.

Becoming a Real Time Payments (RTP) “participant” has numerous challenges.  For many financial entities, this is their first exposure to IBM MQ as a messaging system, which is a requirement to join the network.

The Clearing House’s RTP represents the next evolution of payments innovation for instant transfer of funds and confirmation. Financial institutions, big and small, are offering Real Time Payments to their customers to stay competitive.

 

IBM MQ and RTP

At its foundation, the RTP network was built upon the concept of MQ request/reply. For example, a participant sends a request message following the ISO 20022 XML standard.

MQ request/reply Example

The overall lifespan of the “transaction” is fifteen (15) seconds, but each “hop” between the participant’s queue manager and RTP (and back) is allowed two (2) seconds. The Clearing House’s RTP documentation provides a chart that explains this further; it is outside the scope of this article.

When everything works through the “happy path,” it looks as shown above: the participant app sends a request message and awaits a response within the 15-second window.
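
To make the request/reply plumbing concrete, here is a minimal MQSC sketch of the objects a participant queue manager might define for this flow. RESPONSE.FROM.RTP1 comes from the example above; the remote queue, transmission queue, channel, and connection names are illustrative placeholders, not The Clearing House’s actual naming.

* Reply queue where responses from RTP arrive
DEFINE QLOCAL(RESPONSE.FROM.RTP1) REPLACE

* Transmission queue feeding the sender channel to RTP (name is a placeholder)
DEFINE QLOCAL(RTP.XMITQ) USAGE(XMITQ) REPLACE

* Remote queue definition: requests put here are forwarded to the RTP queue manager
DEFINE QREMOTE(REQUEST.TO.RTP1) RNAME(REQUEST.IN) RQMNAME(RTPQM1) +
XMITQ(RTP.XMITQ) REPLACE

* Sender channel to RTP (connection name is a placeholder)
DEFINE CHANNEL(TO.RTP) CHLTYPE(SDR) TRPTYPE(TCP) +
CONNAME('rtp.example.com(1414)') XMITQ(RTP.XMITQ) REPLACE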

But what happens if something goes wrong with the reply message being placed on RESPONSE.FROM.RTP1?

 

THE DEAD LETTER QUEUE AND THE DEAD LETTER HANDLER

A feature of IBM MQ is assured delivery of messages to their destination. There are, however, a number of scenarios in which MQ may not be able to deliver a message to its destination queue; all such messages are routed to the Dead Letter Queue (DLQ) defined for the queue manager.

Before placing a message on the DLQ, IBM MQ attaches a Dead Letter Header (DLH) to it, containing all the information required to identify why the message was placed on the DLQ and what its intended destination was. This DLH plays a very important role when handling the messages on the DLQ.
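
As a quick MQSC sketch, you can check which queue, if any, is assigned as a queue manager’s DLQ, and define one if needed. The queue name QMGR.DLQ is an illustrative placeholder; many installations use the default SYSTEM.DEAD.LETTER.QUEUE.

* Show the currently assigned dead letter queue
DISPLAY QMGR DEADQ

* Define a dead letter queue and assign it to the queue manager
DEFINE QLOCAL(QMGR.DLQ) DESCR('Dead letter queue') REPLACE
ALTER QMGR DEADQ(QMGR.DLQ)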


IBM provides a Dead Letter Handler utility called “runmqdlq”. Its purpose is to process messages placed on the DLQ, taking the necessary action on each one. A rules table containing the required control data and rules can be created and used as input to the Dead Letter Handler.

 

TYPES OF DEAD LETTER MESSAGES

In a nutshell, messages placed on the DLQ fall into two categories:
– Replayable messages
– Non-replayable messages

Replayable messages are placed on the DLQ due to temporary issues, such as the destination queue being full or put-disabled. These messages can be replayed without any further investigation, and the replay can be performed automatically by the dlqhandler.

Non-replayable messages are placed on the DLQ due to issues that are not temporary in nature (an incorrect destination, data conversion errors, unknown reason codes). These messages require further investigation to identify why they were placed on the DLQ; replaying them with the dlqhandler will most likely just land them back on the DLQ. They therefore require an MQ admin to investigate further.

 

THE DLQ HANDLER SETUP

Replayable messages can be run again (reprocessed) by using the dlqhandler. In short, the dlqhandler is set up as follows:
1) Creating the dlqhandler rule file
2) Starting the dlqhandler

The rule file is a simple text-based flat file with configuration instructions as to how a message should be handled.

 

COMMON SCENARIOS

A number of scenarios are possible, depending on the design of an organization’s MQ infrastructure, but the most common are:

 

  • SINGLE-PURPOSE: Replayable messages are “replayed” to their original queue, and non-replayable messages are placed on a single designated queue for an MQ admin to investigate.

 

  • MULTI-PURPOSE: Replayable messages are “replayed” to their original queue, and non-replayable messages are placed on a designated queue for each application for an MQ admin to investigate.

 

  • HYBRID-PURPOSE: Replayable and non-replayable messages from specific queues are placed on a designated queue for each application for an MQ admin to investigate, while other replayable messages are “replayed” to their original queue, and non-replayable messages are placed on a single designated queue for an MQ admin to investigate.

 

It is worth noting that messages put to the DLQ because their destination queue is full (MQRC_Q_FULL) or put-inhibited (MQRC_PUT_INHIBITED) will be in constant retry mode. It is therefore important to have monitoring and alerting set up on the queue manager and its associated MQ objects to send an alert (email, page, etc.) so that someone can investigate why a queue is approaching 100% depth or is put-inhibited.
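
One hedged way to implement this with MQ’s built-in instrumentation is queue-depth events. In the MQSC below, the queue name APP1.QL and the 80% threshold are illustrative; once enabled, the queue manager puts an event message on SYSTEM.ADMIN.PERFM.EVENT when the queue reaches 80% of its maximum depth, which a monitoring tool can turn into an alert.

* Enable performance events on the queue manager
ALTER QMGR PERFMEV(ENABLED)

* Raise a queue-depth-high event at 80% of MAXDEPTH
ALTER QLOCAL(APP1.QL) QDEPTHHI(80) QDPHIEV(ENABLED)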

 

SCENARIO 1: SINGLE PURPOSE

This is a very common requirement, and it can be achieved by defining the following rule file.

Contents of <scenario1.rul> file:

******=====Retry every 30 seconds, and WAIT for messages =======*****
RETRYINT(30) WAIT(YES)

***** For reason queue full and put disabled *******
**** retry continuously to put the message on the original queue ****
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(999999999)
REASON(MQRC_PUT_INHIBITED) ACTION(RETRY) RETRY(999999999)

**** For all other dlq messages, move those to designated queue *****
ACTION(FWD) FWDQ(UNPLAYABLE.DLQ) HEADER(YES)
******=========================================================*******

 

SCENARIO 2: MULTI PURPOSE

Suppose there are three applications using the queue manager: APP1, APP2, and APP3. Each application has its own queue, plus a dedicated queue to hold its DLQ messages:

  • APP1: application queue APP1.QL; DLQ messages forwarded to APP1.ERR.DLQ
  • APP2: application queue APP2.QL; DLQ messages forwarded to APP2.ERR.DLQ
  • APP3: application queue APP3.QL; DLQ messages forwarded to APP3.ERR.DLQ

We now have to write rules based on the DESTQ for each application. Below is the example rule file for this scenario.
Contents of <scenario2.rul> file:

******=====Retry every 30 seconds, and WAIT for messages =======*****
RETRYINT(30) WAIT(YES)
***** For reason queue full and put disabled *******
**** retry continuously to put the message on the original queue ****
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(999999999)
REASON(MQRC_PUT_INHIBITED) ACTION(RETRY) RETRY(999999999)
******* For APP1, forward messages to APP1.ERR.DLQ ******
DESTQ(APP1.QL) ACTION(FWD) FWDQ(APP1.ERR.DLQ) HEADER(YES)

******* For APP2, forward messages to APP2.ERR.DLQ ******
DESTQ(APP2.QL) ACTION(FWD) FWDQ(APP2.ERR.DLQ) HEADER(YES)

******* For APP3, forward messages to APP3.ERR.DLQ ******
DESTQ(APP3.QL) ACTION(FWD) FWDQ(APP3.ERR.DLQ) HEADER(YES)
**** For all other dlq messages, move those to designated queue *****
ACTION(FWD) FWDQ(GENERAL.ERR.DLQ) HEADER(YES)
*********=========================================================******

 

SCENARIO 3: HYBRID PURPOSE 

In this example, regardless of why a message destined for APP1.QL was put on the DLQ, the message is forwarded to APP1.ERR.DLQ.

For other (non-APP1) DLQ messages, queue-full and put-inhibited messages are retried to their intended destination queue.

If the retry count exceeds 10, or the message is non-replayable, the message is forwarded to UNPLAYABLE.DLQ.

Contents of <scenario3.rul> file:

******=====Retry every 30 seconds, and WAIT for messages =======*****
RETRYINT(30) WAIT(YES)

******* For APP1, forward messages to APP1.ERR.DLQ ******
DESTQ(APP1.QL) ACTION(FWD) FWDQ(APP1.ERR.DLQ) HEADER(YES)

***** For reason queue full and put disabled *******
**** retry 10 times to put the message on the original queue ****
REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(10)
REASON(MQRC_PUT_INHIBITED) ACTION(RETRY) RETRY(10)
**** For all other dlq messages, move those to designated queue *****
ACTION(FWD) FWDQ(UNPLAYABLE.DLQ) HEADER(YES)
******=========================================================*******

 

STARTING THE DLQHANDLER

Once the rule file is created, you can configure dlqhandler startup in the following ways:

1) Manually start the dlqhandler and keep it running:
runmqdlq QMGR_DLQ QMGR_NAME < qrulefile.rul

2) Configure dlqhandler as a service in the queue manager
SPECIAL NOTE FOR WINDOWS SERVERS:
DEFINE SERVICE(dlqhandler) +
SERVTYPE(SERVER) +
CONTROL(MANUAL) +
STARTCMD('c:\var\bin\mqdlq.bat') +
DESCR('dlqhandler service') +
STARTARG('DLQ QMNAME C:\var\rulefiles\qrulefile.rul') +
STOPCMD('c:\var\bin\stopdlh.bat') +
STOPARG('DLQ QMNAME') +
STDOUT('/path/dlq.log') +
STDERR('/path/dlq_error.log')

NOTE: For Windows, because it doesn’t handle input redirection (“<”) the way Linux does, a wrapper script must be written.

Contents of mqdlq.bat (start command):
echo alt ql(%1) get(enabled) | runmqsc %2
runmqdlq.exe %1 %2 < %3

Contents of stopdlh.bat (stop command):
echo alt ql(%1) get(disabled) | runmqsc %2

Get-enabling the DLQ at startup and get-disabling it at shutdown is what controls the dead letter handler: when the queue is get-disabled, runmqdlq ends.

3) Configure triggering on the DLQ to start the dlqhandler whenever the first message arrives on the queue.

You can do this by simply following the standard steps for configuring triggering on any queue; refer to the IBM MQ documentation for more information on triggering.

Once you set up triggering, the dlqhandler will be started by a trigger based on the condition that you set. You don’t need to keep the dlqhandler running all the time, because triggering will start it again when needed. For this reason, you don’t need WAIT(YES) in the rule file; you can change it to WAIT(NO).
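
As a minimal MQSC sketch of that setup, define a process object pointing at a script that runs the dlqhandler, then enable triggering on the DLQ; the process name, script path, and DLQ name are illustrative placeholders. A trigger monitor such as runmqtrm must be watching the initiation queue for the trigger to fire.

* Process object naming the script that starts runmqdlq
DEFINE PROCESS(DLQ.HANDLER.PROC) APPLTYPE(UNIX) +
APPLICID('/var/mqm/scripts/startdlh.sh') REPLACE

* Trigger the process on the first message arriving on the DLQ
ALTER QLOCAL(QMGR.DLQ) TRIGGER TRIGTYPE(FIRST) +
PROCESS(DLQ.HANDLER.PROC) INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE)

Then run the trigger monitor against the initiation queue:

runmqtrm -m QMGR_NAME -q SYSTEM.DEFAULT.INITIATION.QUEUE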

 

WANT TO LEARN MORE?

TxMQ delivers an RTP Starter Pack to accelerate participant onboarding.  You can learn more about our starter pack here: http://txmq.com/RTP/

Our deep industry experience and subject matter experts are available to solve your most complex challenges. We deliver solutions and innovations for doing business in an ever-changing world, guiding and supporting your organization through the digital transformation journey.

TxMQ
Imagine. Transform. Engage.
We’re here to help you work smart in the new economy.

This post was authored by John Carr, Principal Integration Architect at TxMQ. You can follow John on LinkedIn here.

IBM MQ v9.2 Release Update

Great news, MQ users! IBM has just released the new IBM MQ v9.2, and it’s ready for download.

The new version has many features that continue to make IBM MQ the industry leader in messaging middleware. MQ v9.2 is the follow-on long-term support release (LTSR) to MQ v9.1; as an LTSR, its fix packs will deliver only fixes, with no new function. MQ v9.2 includes the features that were delivered in the CD releases of MQ v9.1.1 through v9.1.5, along with some minor enhancements; the capabilities carried over from those CD releases and those new in MQ v9.2 are detailed in IBM’s release documentation. For more information, see the IBM MQ FAQ for Long Term Support and Continuous Delivery releases website.

This release encompasses all versions of MQ including the MQ Appliance and MQ on Cloud. Below we’ve outlined a few of the new enhancements included in the release.

MQ v9.2 includes enhancements in the following areas:

  • Simplified, smart management
  • Augmented security
  • Expanded client application options
  • Easier adoption of containers
  • More flexible design and deployment

  • For more info on the new release, view the IBM MQ 9.2 release document here.
  • Find out more about what’s new in Version 9.2.0.
  • Find out what’s changed in Version 9.2.0.

Extended Support for your IBM MQ 

Keep in mind that although IBM has extended support for IBM MQ v8.x through September 30th, now is the time to get your plans in order. If you need help determining an upgrade path, or choosing the right option for End-of-Service or End-of-Life support for your out-of-date software, contact TxMQ for a free discovery call.

If you’re interested in learning more about how TxMQ can help support your MQ, take a closer look at some of our services on our IBM MQ and WebSphere Support page.

Feel free to reach out anytime through the form below, we’d love to help you take the next steps towards upgrading your IBM MQ, or assist in choosing your best support options.

 

Real-Time Payment Network Administration: IBM MQ Expiry Reporting


Becoming a Real-Time Payments (RTP) “participant” has numerous challenges.  For many financial entities, this is their first exposure to IBM MQ as a messaging system, which is a requirement to join the network.

The Clearing House’s RTP represents the next evolution of payments innovation for instant transfer of funds and confirmation. Financial institutions, big and small, are offering Real-Time Payments to their customers to stay competitive.

This article provides basic scenarios and explanations regarding message expiry and the IBM MQ-generated reports that identify expired messages.

IBM MQ MESSAGE EXPIRY WITH RTP

At its foundation, the RTP network is built upon the concept of MQ request/reply. For example, a participant sends a request message following the ISO 20022 XML standard. These request/reply messages have a short lifespan, and if a message isn’t consumed by its expiry time, an MQ expiry report is generated (according to the report options set on the message) and returned to the participant’s response queue.

For example, when generating a request, it is the participant app’s responsibility to set the message expiry duration and to indicate whether an expiry report should be generated. The participant app’s request message also sets the reply-to queue and queue manager, completing the request/reply round trip.
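
While expiry itself is set by the sending application in the MQMD, an administrator can also cap expiry at the queue level. As a hedged MQSC example (the queue name is from this article; CAPEXPRY is specified in tenths of a second, so 150 equals 15 seconds), the following caps the expiry of any message put to the queue:

ALTER QLOCAL(RESPONSE.FROM.RTP1) CUSTOM('CAPEXPRY(150)')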

SCENARIO 1: HAPPY PATH

The overall lifespan of the “transaction” is fifteen (15) seconds, but each “hop” between the participant’s queue manager and RTP (and back) is allowed two (2) seconds. The Clearing House’s RTP documentation provides a chart that explains this further; it is outside the scope of this article.

When everything works through the “happy path,” it looks as shown above: the participant app sends a request message and awaits a response within the 15-second window.

SCENARIO 2:  TCH-RTP APP DOES NOT READ MESSAGE

When the message is delivered to RTP and isn’t read off the queue in time, it’s considered “expired.” An internal IBM MQ process detects this the next time an app opens the queue for input, generates an expiry report message, and sends it back to your queue manager. The message payload is the XML request message that the original participant app sent.

When you get the message, you can tell which queue manager generated the expiry report by looking at the Put Application Name within the MQMD. In the case above, it’s from RTP.
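
A simple way to inspect those MQMD fields is amqsbcg, the browse sample shipped with the IBM MQ samples; the queue and queue manager names below are from this example. In its output, an expiry report carries a report message type, an expiration feedback code, and the Put Application Name of the side that generated it.

amqsbcg RESPONSE.FROM.RTP1 QMGR_NAME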

SCENARIO 3: MESSAGE STUCK IN THE PARTICIPANT XMIT QUEUE

It is possible for your request message to get stuck on your XMIT queue.  This is due to the sender channel being unable to communicate with its companion receiver channel.  When the message is considered “expired,” an internal IBM MQ process detects this and generates the expiry report as shown above.  

When you get the message, you can tell which queue manager generated the expiry report by looking at the Put Application Name within the MQMD. In the case above, it’s from your own queue manager.
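
When a message is stuck on the transmission queue, a few MQSC checks will usually reveal the problem and let you restart the channel; the channel and queue names are illustrative placeholders.

* Is the sender channel running, retrying, or stopped?
DISPLAY CHSTATUS(TO.RTP)

* How many messages are waiting on the transmission queue?
DISPLAY QLOCAL(RTP.XMITQ) CURDEPTH

* Restart the channel once the underlying issue is resolved
START CHANNEL(TO.RTP)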

CONCLUSION

Setting message expiry is another hallmark of IBM MQ, ensuring messages are consumed within a prescribed amount of time. Expiry reports are a type of report generated by the queue manager when a message is discarded before delivery to an application.

The concepts within this article are not limited to MQ messages between participants on the RTP network; they can be used between any IBM MQ-enabled applications. 

WANT TO LEARN MORE?

TxMQ delivers an RTP Starter Pack to accelerate participant onboarding.  You can learn more about our starter pack here: http://txmq.com/RTP/

Our deep industry experience and subject matter experts are available to solve your most complex challenges. We deliver solutions and innovations for doing business in an ever-changing world, guiding and supporting your organization through the digital transformation journey.

This post was authored by John Carr, Principal Integration Architect at TxMQ. You can follow John on LinkedIn here.

How Bank IT Leaders Can Get out of Reactive Mode (and Start Preparing for Tomorrow)

I spend a lot of time talking to IT professionals in banks across the US and Canada. Some large and global, others regional. As the CEO of a technology consultancy that works with financial institutions of all sizes, having varied conversations with our clients is a big part of my job. And I can tell you that almost every single one of them says the same thing: I’m so busy reacting to day-to-day issues that I just don’t have time to really plan for the future.

In other words, they’re always in a reactive mode as they deal with issues that range from minor (slow transaction processing) to major (catastrophic security breaches). But while playing whack-a-mole is critical to any bank, even a small shift in priorities can give CIOs and their teams the room to get ready for tomorrow rather than just focusing on today.

How to get out of reactive mode

Every bank technology person intuitively knows all this, of course, but it’s almost impossible for most to carve out the time to do any real planning. What they need are some ways to break the cycle. To that end, here are just a few suggestions for IT leaders, based on my experiences with bank IT organizations, to get out of reactive mode and start preparing for tomorrow.

Have a clear vision

A clear vision is important in all organizations. Knowing what we’re all marching towards not only helps keep teams focused and unified, but also ensures high morale and a sense of teamwork. The day-to-day menial tasks mean a lot more when understood in the context of the overall goal.

Break projects into smaller projects

As a runner, I’ve participated in my share of marathons, and what I can say is that I’ve never started one only to tell myself, “Okay, just 26.2 miles to go!” Rather, like most runners, I break the race down into digestible (and mentally palatable!) chunks. It starts with a 1 mile run. Then I work up to a 10k (about 6 miles), and so on, until I reach the final 5k.

Analogously, I’ve seen successful teams in large organizations do amazing things just by breaking huge, company-shifting tasks into smaller projects — smaller chunks that everyone can understand, get behind, and see the end of. Maybe it’s a three-week assessment to kick off a six-month work effort. Or maybe it’s a small development proof of concept before launching a huge software redeployment initiative slated to last months. Whatever the project, making things smaller allows people to enjoy the little successes, and helps keep teams focused.

Get buy-in from company leadership

IT leaders are constantly going to management asking for more money to fund their projects and operations. And a lot of times, management doesn’t want to give it to them. It’s a frustrating situation for both parties, to be sure, but consider that one of the reasons management might be so reluctant to divert even more money to IT is that you have nothing to show them for all the cash they’ve put into it previously. In their minds, they keep giving you more money, but nothing really changes. You’re still putting out fires and playing whack-a-mole.

If, on the other hand, you’re able to show them a project that will ultimately improve operations (or improve the customer experience, or whatever your goal is) they’ll be a lot more likely to agree. As an IT leader, it’s your job to seek out these projects and bring them to business leaders’ attention.

Implement DevOps methodology

I find a lot of financial institutions are still stuck in the old ways of managing their application lifecycles. They tend to follow an approach — the so-called “waterfall” model — that’s horribly outdated. The waterfall model for building software essentially involves breaking down projects into sequential, linear phases. Each phase depends on the deliverables of the previous phase to begin work. While it sounds straightforward enough, the flaw with the waterfall model is that it doesn’t reflect the way software is actually used by employees and customers in the real world. The reality is, requirements and expectations change even as the application is being built, and a rigid methodology like the waterfall model lacks the responsiveness that’s required in today’s business environment.

To overcome these flaws, we recommend a DevOps methodology. DevOps combines software development with IT operations to shorten application development lifecycles and provide continuous delivery. In essence, DevOps practitioners work to increase communication between software development teams and IT teams, automating those communication processes wherever possible. This collaborative approach allows IT teams to get exactly what they need from development teams, faster, to do their job better. “Fail fast, fail often” is a common mantra. Encourage the failure, learn from it, and then iterate to improve.

DevOps is obviously a radical shift from the way many bank IT professionals are used to making and using enterprise software, and to really implement it right, you need someone well-versed in the practice. But implemented correctly, it has the capacity to kickstart an IT organization that’s stuck in a rut.

Getting ahead

As an IT consultant, I’ve heard all the answers in the book for why your organization can’t seem to get ahead of the day-to-day. But these excuses are just that: excuses. If you’re an IT leader, by definition you have the power to change your organization. You just need to exercise it effectively.

Remember: our world of technology has three pillars: people, process, and technology. No stool stands on two legs, nor does IT. Understand these three complementary components, and you’re well on your way to transforming your organization.

Why Banks Need to Start Thinking Like Tech Companies

Historically, for most Americans (and Canadians), the local bank branch has always been where you go not just to deposit and withdraw cash, but to manage your retirement or savings account, apply for a credit card and secure a home, car or small business loan. Today, however, the bank’s ascendancy is being challenged by the rise of alternative institutions and other scrappy players who are trying to tap into areas that were formerly the exclusive domain of banks. This category of emerging fintech companies includes online-only banks, credit unions, retirement planning apps, online lending marketplaces, peer-to-peer payment platforms and others too numerous to mention. And while banks may have the size advantage, nothing in business lasts forever. Do these Davids have a chance to slay Goliath? And what do the banks need to do to protect themselves from upstart challengers?

Studies indicate these new entities are giving banks a run for their money (no pun intended). The top five U.S. banks, for instance, accounted for only 21% of mortgage originations in 2019, compared to half of mortgages in 2011. Filling the gap are non-bank lenders, which not only offer a convenient, digital-first customer experience, but also tend to approve more applicants. Similar trends can be witnessed in small business loans and personal loans.

It’s not a stretch to say the traditional bank is facing an existential crisis, brought on in part by a long-standing lack of competition. At one point, for example, many towns had just one bank, and that single bank didn’t have to innovate in the face of zero competition. That reality may have led to a decades-long attitude of complacency, which in turn has led to a failure to innovate. Retail banks need to rethink pretty much everything. In short, they need to start thinking like a startup—more specifically, a tech startup. Silicon Valley is driven in large part by a philosophy of disruption, innovation and entrepreneurship. Many alternative lenders have been empowered by this philosophy, but that’s not to say that traditional banks can’t make use of it, too. Far from it. Here are some ways that banks can start thinking more like tech companies so they can stay competitive against alternative providers.

Embrace lean methodology. 

Startups, by definition, lack the resources of more established businesses, but they don’t let those limitations stifle innovation. In fact, those limitations actually serve to encourage innovation. Lean methodology is a way of designing and bringing new products to market that fits the limited financial resources of startup organizations. First outlined by entrepreneur Eric Ries in “The Lean Startup,” this approach emphasizes building and testing iteratively to reduce waste and achieve a better market fit.

To become vehicles of innovation, banks should consider adopting similar methodologies. I’m not suggesting that they should create artificial obstructions or arbitrary constraints. But no matter the size of the institution, budgets are always going to feel too small—not least of all because product developers for massive institutions need to develop huge products to match. With tried-and-true methodologies for innovation like the Lean Startup out there, scarcity shouldn’t be an excuse for not innovating. 

Fail fast, iterate often. Adopt Agile.

Startups know that rapid iteration cycles mean rapid innovation. It also means embracing a culture of failure. Failing to fail means failing to succeed. These are the lynchpins of agile or lean methodologies. Excellence is the enemy of success and progress. Get it done, get it out there in front of the market and then iterate improvements.

Identify opportunities with big data. 

One of the reasons alternative lenders are able to offer such high rates of approval is that they employ state-of-the-art AI and machine learning techniques to get a better picture of their customer than a simple credit or background check can deliver. Well-trained AI algorithms can efficiently comb through a wide body of available data to uncover trends and make predictions about the risk of lending to a given individual with incredible accuracy. 

Online-first lenders have such an advantage here because they’re in a better place to mine that data. What a lot of people forget about data analytics is that the greatest algorithms are only as good as the data you feed them. Businesses, and banks especially, generate millions of data points per day—data that could prove valuable for data mining and other similar uses. However, the majority of this data is unstructured, heterogeneous, and often siloed and difficult to access. Many successful online-first lenders have carefully structured their digital loan applications to be useful for data analytics purposes from the ground up. When nearly 40% of the work of data analytics is gathering and cleaning data, this represents a huge advantage for the fintech startup.

But traditional banks can take advantage of this, too. Developing online and mobile banking applications to replace old-fashioned paper forms for most activities would set banks up to make better use of that data by ingesting it in a cleaner format. Add in the fact that customers are demanding mobile banking features anyway, and there’s no excuse for not offering customers a more robust set of mobile banking features.

Shrink bloated bureaucracy with cross-functional organizations.

Think about all the startups you’ve visited. Did teams operate in silos, constantly blaming other teams for their inability to make progress? Or did they adapt to situations, never believing their roles to be fixed or immutable?

To become the latter kind of organization, traditional banks need to break the cycle of bureaucratic apathy. One way to do that is to have disparate teams work together on projects. Working on shared projects not only helps develop a sense of shared purpose, but it also empowers employees to solve problems in areas that are not considered part of their traditional wheelhouse. That, in turn, reduces the inefficiency of teams passing the baton from division to division until weeks or months have gone by before the customer’s concern has even been truly considered. Moreover, bringing together different kinds of minds and thinkers creates the kind of fertile ground in which innovation is known to thrive.

Reports of the bank’s death have been greatly exaggerated.

Ultimately, banks have numerous advantages that they can leverage over most fintech startups. They have their brick-and-mortar retail locations, allowing them to make personal connections with customers that drive loyalty. They’re considered more trustworthy to the average consumer (for the most part). And a lot of people just want to do all their banking at a single bank branch rather than shop around for various piecemeal banking solutions. If banks can innovate their information technology and organizational structures to meet the changing needs of today’s customers, they can continue to dominate the financial market.

 

Enhancing the Customer Experience With AI


Even as artificial intelligence-based technologies continue to permeate more and more aspects of our lives, I’ve noticed a stubborn tendency among the banking old guard to focus on improving operational efficiencies rather than enhancing the customer experience. That may make sense on paper, but it’s ultimately bad for long-term user satisfaction.

It’s easy to realize efficiency gains in the call center by automating customer support with AI. It’s far harder to understand what customers really want out of their experience and then use AI to drive towards that. AI offers multiple vectors of possible change, from identifying patterns in otherwise noisy data, to uncovering more appropriate, relevant means of evaluating creditworthiness. We are only just beginning to scratch the surface of what these technologies can offer.

Yet in a world of unintended consequences, it is important that we don’t just innovate for innovation’s sake. Too much technology and we lose the personal touch that we know remains critical for any business. What’s important to understand is that AI is more than just chatbots. In fact, AI can have a positive impact across nearly every aspect of the customer experience. To demonstrate that, I’ve laid out just four of the diverse ways that AI can enhance the banking customer experience.

1 – Conversational Banking

Conversational banking is the closest of these technologies to the popular conception of a chatbot. But let’s be clear: these aren’t your parents’ chatbots. Advances in AI, in particular natural language processing, have empowered businesses to take a lot of boring or routine conversations off the phone and into the online realm. Customers don’t even have to do anything special to communicate with these systems because they handle a variety of natural speech patterns and can even work with syntax they’ve never seen before and still find the answer to customers’ questions. That’s a win-win because customers get the experience of talking to a live person without the pain of having to wait on hold, and banks can save money on call centers while improving the user experience.

2 – Lowering Default Risk

The only thing worse than waiting for an answer is being rejected for credit cards and loan applications. At many banks, this process is still overseen by individuals, and it can take days for customers to get a response. But not only is the process slow, it’s not particularly accurate either — especially because banks mostly rely on an applicant’s credit score to determine their approval.

Thankfully, AI developers have created far more accurate solutions that take into account factors such as the amount of equity their customers hold, their job stability, and their debt-to-income ratio. Training these algorithms on large bodies of data, they’ve created programs that approve or deny candidates more quickly and accurately than traditional methods. Having embraced the use of these algorithms from the beginning, many alternative lenders have seen a higher rate of approval for business and consumer loans than traditional banks — an advantage that both they and their customers enjoy. Traditional banks must follow suit.

3 – Improving Security

Biometric security methods don’t just make it harder for criminals to access phones; they actually improve the customer experience over passwords and passphrases by promoting ease of access. Between the ubiquitous smartphone thumbprint reader and increasingly common face-recognition feature sets, customers have clearly embraced the quickness and ease of using biometrics to unlock their phones, make payments, and access other sensitive accounts in place of hard-to-remember passcodes and other phrases. And they expect that ease of access from their banks, too.

What makes today’s biometrics possible, of course, are advanced AI-based algorithms that take in data from cameras or other sensors, identify key face or fingerprint points, and then compare them against user-provided scans to make an accurate (usually 98% or higher) prediction about whether the user attempting to gain access is the owner of the phone or account. With biometrics being clearly the superior option for both security and customer experience, banks have plenty of incentives to offer biometric authentication wherever they can.

4 – Identifying Fraud

Perhaps AI’s biggest benefit, from a business perspective, is its ability to predict future behavior based on past behavior. Feed a well-trained AI algorithm real-time data, and it can predict with far greater-than-human accuracy the likely outcome. That capability makes it invaluable for banks, which often have to make quick decisions about large sums of money.

Accurately identifying fraud is extremely important to both banks and their customers, and not just for catching cases of credit card fraud or stolen identity. You also don’t want a system that’s too sensitive. (We all have that friend — maybe we are that friend — whose bank uses an overly-aggressive fraud detection algorithm.) By using AI to analyze past instances of confirmed fraud, banks can uncover patterns that help them determine the likelihood of a current transaction being fraudulent. This means avoiding devastating cases of theft, on the one hand, and annoying credit lockdown situations on the other.

Smarter. Better. Faster.

There’s no doubt that artificial intelligence will continue to revolutionize the way customers interact with banks and businesses, and as the 2020s progress, we can expect that rate of innovation to only accelerate. To continue to compete in an ecosystem that is being radically transformed before our eyes, banks must stop playing catch-up when it comes to AI. They must seize this opportunity to modernize their IT infrastructures, not just to ensure that they can power today’s innovative solutions, but also to ensure they’re at the leading edge of whatever innovations tomorrow’s AI researchers develop.

Open Banking in the US?


Can government intervention in banking actually encourage innovation rather than restrict it? That’s the question the U.K. government set out to answer with the implementation of its Open Banking directive.

This policy, which requires the country’s nine biggest banks to make customers’ financial data accessible by authorized third-party service providers, came into effect in January of 2018. Now, more than two years later, we’ve seen the results of this experiment first-hand, and the feedback has been quite positive overall. Furthermore, that success across the pond has given many U.S. banking leaders the confidence to start thinking about what similar regulation would look like here.

As a technologist who spends his days helping businesses modernize their IT infrastructures, I think this embrace (albeit cautious) of open banking is an extremely positive development. I’ve seen first-hand the benefits that open platforms can have in an industry, for customers and providers alike. That’s why I think it’s so important that business leaders at traditional banks understand just what open banking is. Because once they do, they’ll agree that open banking, whether it’s through government regulation or through their own action, is just what traditional banks need to stay competitive in our increasingly digital age.

What is open banking?

Let’s start by defining open banking. In short, it is the practice of opening up consumer financial data through APIs. A bank that embraces the open banking model will create APIs that define how a program can reliably and securely access its customers’ data. In addition, there are opportunities, outside the scope of this article, to monetize those APIs, creating entirely new revenue streams.

By creating these specifications, open banking simplifies the process of building third-party apps that need access to consumer data. In fact, you’ve probably been the beneficiary of open banking if you’ve ever used apps such as Mint, Wealthfront, Venmo or TurboTax. Even if you’re wary of just handing over your bank account credentials to a third party, you probably have no problem with checking a box that allows only select data, like transactions, to be shared to authorized and properly vetted apps. And that, in a nutshell, is the power of open banking: it engenders the kind of trust that startup or niche third-party digital service providers need in order to gain traction.

What is the U.K. model?

What I’ve just given is the technical definition of open banking. But open banking is also a political, or more specifically a regulatory, concept. In the U.K., the Second Payment Services Directive (“PSD2” for short) requires the country’s largest banks, including HSBC, Barclays and Lloyds, to make certain customer data accessible through APIs. Some of the goals of the directive are to increase competition, counteract monopoly-like effects, and make banks work harder to get (and please and retain) customers.

Those are laudable and important goals for maintaining a robust liberal economic system, but perhaps the most salient aspect of the directive for British citizens will be how it encourages innovation in banking. The idea is that by lowering the barrier to entry for fintech startups, PSD2 will lead to the creation of new startups and products that benefit consumers.

Why should banks embrace open banking?

If open banking is so good for startups and other non-banks, what incentive do major banks in the U.S. have to implement open banking? Aren’t they just enabling their competition and digging their own graves?

The answer is “not really,” but before we get into that I should note that customers are already starting to expect open banking-like features from their banks. They want it to be easy and secure to set up Apple Pay or integrate their transaction data into Mint or TurboTax. Banks must keep up with consumer expectations if they want to retain customers.

Additionally, most of these apps don’t really represent competition for banks; in fact, these services tend to be complementary or adjacent to traditional banking services. If a bank’s core business is stowing customers’ money and giving out loans, then it has little to fear from a personal finance or tax app using its customers’ data. All sharing data can do in that case is make their customers more responsible with their money.

The security benefits of open banking shouldn’t be downplayed, either. Defining exactly how apps can gain access to just the data they need will reduce the practice of customers handing over their account credentials to get the digital services they want. That represents a huge reduction in risk for banks with comparatively little investment on their part.

Bringing Banking into the 21st century

Finally, let’s not forget that U.S. banks are already dabbling in open banking voluntarily — at least selectively. That suggests banking leaders must already agree to some extent that opening up data drives innovation. And innovation is something the banking industry could definitely use a dose of. While fintech startups have made splashes specializing in making just one aspect of the consumer financial experience better, banks have attempted to expand their services into every little corner of the banking-adjacent market. What that results in is bloated organizations and unprofitable units siphoning resources from making banking better.

Banks should be using their resources to develop better APIs and deeper data analytics to help them make better loan decisions or catch fraud — not trying to get into the mobile app business. It’s unlikely that big banks will ever become “lean” organizations, but embracing open banking can at last allow banks to offload non-core work and get back to the fundamental services that make them profitable.