Introducing Aviator DLT by TxMQ’s DTG

In 2017, TxMQ’s Disruptive Technologies Group created Exo – an open-source framework for developing applications using the Swirlds Hashgraph consensus SDK. Our intention for Exo was to make it easier for Java developers to build Hashgraph and other DLT applications. It provided an application architecture for organizing business logic, and an integration layer over REST or Java sockets. Version 2 of the framework introduced a pipeline model for processing transactions and the ability to monitor transactions throughout their life cycle over web sockets.

TxMQ used Exo as the basis of work we’ve delivered for our customers, as well as the foundation for a number of proofs-of-concept. As the framework continued to mature, we began to realize its potential as the backbone for a private distributed ledger platform.

By keeping the programming and integration model consistent, we are able to offer a configurable platform that is friendlier to enterprise developers who don’t come from a distributed ledger background. We wanted developers to be able to maximize the investment they’ve made in the skills they already have, instead of having to tackle the considerable learning curve associated with new languages and environments.

Enterprises, like developers, also require approachability – though from a different perspective. Enterprise IT is an ecosystem in which any number of applications, databases, APIs, and clients are interconnected. For enterprises, distributed ledger is another tool that needs to live in the ecosystem. In order for DLT to succeed in an enterprise setting, it needs to be integrated into the existing ecosystem. It needs to be manageable in a way that fits with how enterprise IT manages infrastructure. From the developer writing the first line of code for their new DApp all the way down to the team that manages the deployment and maintenance of that DApp, everyone needs tooling to help them come to grips with this new thing called DLT. And so the idea for Aviator was born!

Aviator is an application platform and toolset for developing DLT applications. We like to say that it is enterprise-focused yet startup-ready. We want to enable the development of private ledger applications that sit comfortably in enterprise IT environments, flattening the learning curve for everyone involved.

There are three components of Aviator: The core platform, developer tools, and management tools.

Think of the core platform as an application server for enterprise DApps. It hosts your APIs, runs your business logic, handles security, and holds your application data. Each of those components is meant to be configurable so Aviator can work with the infrastructure and skill sets you already have. We’ll be able to integrate with any Hibernate-supported relational database, plus NoSQL datastores like MongoDB or CouchDB. We’ll be delivering smart contract engines for languages commonly used in enterprise development, like JavaScript, Java, and C#. Don’t worry if you’re a Solidity or Python developer, we have you on our radar too. The core platform will provide a security mechanism based on a public key infrastructure, which can be integrated into your organization’s directory-based security scheme or PKI if one is already in place. We can even tailor the consensus mechanism to the needs of an application or enterprise.

Developing and testing DApps can be complicated, especially when those applications are integrated into larger architectures. You’re likely designing and developing client code, an API layer, business logic, and persistence. You’re also likely writing a lot of boilerplate code. Debugging an application in a complicated environment can also be very challenging. Aviator developer tools help to address these challenges. Aviator can generate a lot of your code from Open API (Swagger) documents in a way that’s designed to work seamlessly with the platform. This frees developers to concentrate on the important parts and cuts down on the number of bugs introduced through hand-coding. We’ve got tools to help you deploy and test smart contracts, and more tools to help you look at the data and make sure everything is doing what it’s supposed to do. Finally, we’re working on delivering those tools the way developers will want to use them, whether that’s through integrations with existing IDEs like Visual Studio Code or Eclipse, or in an Aviator-focused IDE.
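
To make the boilerplate point concrete, below is a rough sketch of the kind of hand-written REST plumbing that generating code from an Open API (Swagger) document is intended to replace. It is not Aviator’s actual generated output; the endpoint, the `Policy` object, and the in-memory store are hypothetical stand-ins built on the standard JAX-RS annotations many enterprise Java developers already know.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical domain object; in practice this shape would come from the
// schema section of the OpenAPI document.
class Policy {
    public String id;
    public String holderName;
}

@Path("/policies")
public class PolicyResource {
    // Stand-in for the business-logic / persistence layer.
    private static final Map<String, Policy> STORE = new ConcurrentHashMap<>();

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response submitPolicy(Policy policy) {
        policy.id = UUID.randomUUID().toString();
        STORE.put(policy.id, policy); // hand off to real business logic in practice
        return Response.accepted(policy.id).build();
    }

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Policy getPolicy(@PathParam("id") String id) {
        return STORE.get(id);
    }
}
```

Multiply that by every resource and operation in an API and the appeal of generating it from a single OpenAPI document becomes obvious.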

The work doesn’t end when the developers have delivered. Deploying and managing development, QA, and production DLT networks is seriously challenging. DLT architectures include a number of components, deployed across a number of physical or virtual machines, scaled across a number of identical nodes. Aviator aims to have IT systems administrators and managers covered there as well. We’re working on a toolset for visually designing your DLT network infrastructure, and a way to automatically deploy that design to your physical or virtual hardware. We’ll be delivering tooling to monitor and manage those assets through our own management tooling, or by integrating with the network management tooling your enterprise may already have. This is an area where even the most mature DLT platforms struggle, and there are exciting opportunities to lower friction in managing DLT networks through better management capabilities.

So what does this all mean for Exo, the framework that started our remarkable journey? For starters, it’s getting a new name and a new GitHub. Exo has become the Aviator Core Framework, and can now be found on TxMQ’s official GitHub at https://github.com/txmq. TxMQ is committed to maintaining the core framework as a free, open source development framework that anyone can use to develop applications based on Swirlds Hashgraph. The framework is a critical component of the Aviator Platform, and TxMQ will continue to develop and maintain it. There will be a path for applications developed on the framework to be deployed on the Aviator Platform should you decide to take advantage of the platform’s additional capabilities.

For more information on Aviator, please visit our website at http://aviatordlt.com and sign up for updates.


Which of these MQ mistakes are you making?


IBM MQ Tips Straight From TxMQ Subject Matter Experts

At TxMQ we have extensive experience helping clients with their IBM MQ deployments, including planning, integrations, management, upgrades, and enhancements. Over the years we’ve seen just about everything, and we’ve found that the most common mistakes are easy to avoid with a little insight. IBM MQ is a powerful tool that makes a huge difference in your life every day, yet most of us only notice it when it’s not working, and one small mistake can wreak havoc on your entire system. Here are a few tips to keep you up and running:

1. Don’t use MQ as a database. MQ is for moving messages from one application or system to another; a database is the right place for data at rest. MQ resources are tuned for data throughput and efficient message delivery, and IBM MQ performs best when messages are kept small.

2. Don’t expect assured delivery if you didn’t design for it! IBM MQ provides assured, once-only message delivery through message persistence combined with up-front planning in the application integration design process, which accounts for poison-message handling that could otherwise cause issues and failures or, worse, unexpected results. Know your business requirements for message delivery quality of service, and make sure your integration design accommodates the advanced settings and features as appropriate (see the JMS sketch after this list).

3. Don’t give every application its own queue manager! Sometimes a dedicated queue manager makes sense, sometimes it doesn’t; understand how to analyze what is best for your needs. Where it fits, optimize your messaging architecture for shared messaging to control infrastructure costs and make the best use of operational support resources.

4. Don’t fall out of support. While TxMQ can offer support for out-of-support products, it’s costly to let support lapse on the products you run today, and even more so if you have to play catch-up later!

5. Don’t forget monitoring! MQ is powerful and stable, but if a problem occurs, you want to know about it right away. Don’t wait until your transmission queues (XMITQs) and application queues fill up and bring the queue manager down before you respond!
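
Tip 2 ultimately comes down to deliberate choices in code and configuration. As a minimal sketch using the standard JMS API (not any TxMQ- or IBM-specific class; the JNDI names and message content are placeholders), marking a message persistent and sending it inside a transacted session looks like this:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class PersistentSender {
    public static void main(String[] args) throws Exception {
        // JNDI names are placeholders; they would resolve to your queue
        // manager and queue definitions in a real environment.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/ORDERS.IN");

        Connection conn = cf.createConnection();
        try {
            // A transacted session lets the application commit or roll back
            // the send, which is part of designing for assured delivery.
            Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(queue);

            // PERSISTENT messages are hardened to disk and survive a queue
            // manager restart; NON_PERSISTENT messages may not.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);

            TextMessage msg = session.createTextMessage("{\"orderId\": 1001}");
            producer.send(msg);
            session.commit();
        } finally {
            conn.close();
        }
    }
}
```

The poison-message side of the design lives mostly in queue configuration (backout thresholds and backout queues) and in consumer logic that knows when to stop redelivering a bad message.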

In the cloud, on mobile, on-prem, or in the IoT, IBM MQ simplifies, accelerates, and facilitates security-rich data exchange. To keep your MQ running smoothly, reach out and talk with one of our Subject Matter Experts today!

If you like our content and would like to keep up to date with TxMQ don’t forget to sign up for our monthly newsletter.

 

Contemplations of Modernizing Legacy Enterprise Technology

What should you think about when modernizing your legacy systems?

Updating and replacing legacy technology takes a lot of planning and consideration. It can take years for a plan to become fully realized. Often poor choices during the initial planning process can destabilize your entire system, and it’s not unheard of for shoddy strategic technology planning to put an entire organization out of business.

At TxMQ we play a major role in helping our clients plan, integrate, and maintain legacy and hybrid systems. I’ve outlined a few areas to think about in the course of planning (or in some cases re-planning) your modernization of legacy systems.

1. The true and total cost of maintenance
2. Utilize technology that integrates well with other technologies and systems
3. Take your customer’s journey into consideration
4. Ensure that technical debt doesn’t become compounded
5. Focus on fixing substantiated, validated issues
6. Avoid technology and vendor lock-in

1. The true and total cost of maintenance

Your ultimate goal may be to replace the entire system, but taking that first step typically means making the move to a hybrid environment.

Hybrid environments utilize multiple systems and technologies for various processes, and they can be extremely effective, but they are difficult to manage by yourself. If you are a large corporation with seemingly endless resources and an agile staff with an array of skill sets, then you may be prepared. The reality, however, is that most IT departments are on a tight budget, with people multitasking and working far more than 40 hours a week just to maintain current systems.

These days most IT departments just don’t have the resources. This is why so many organizations are moving to Managed IT Services to help mitigate costs, take back some time, and become more agile in the process.

When you’re deciding to modernize your old legacy systems, you have to take into consideration the actual cost of maintaining multiple technologies. As new tech enters the marketplace and older technologies and applications move toward retirement, so do the people who historically managed those technologies for you. It’s nearly impossible today to find a person willing to put time into learning technology that’s on its last legs. It’s a waste of time for them, and it will be a huge drain on time and financial resources for you. It’s like learning how to fix a steam engine instead of learning modern electric engines: I’m sure it’s a fun hobby, but it will probably never pay the bills.

You can’t expect newer IT talent to accept work that means refining and utilizing skills that will soon no longer be needed, unless you’re willing to pay them a hefty sum not to care. Even then, it’s just a short-term answer; don’t expect them to stick around for long, and always have a backup plan. It’s also good to have someone on call who can help in a pinch and provide fractional IT support as needed.

2. Utilize technology that integrates well with other technologies and systems.

Unless you’re looking to rip and replace your entire system, you need to ensure that the new plays well with the old. Spoiler alert: different technologies and systems often don’t play well together.

Just when you think you’ve found that missing piece of software that fixes all the problems your business leaders insist they need solved, you’ll find that integrating it into your existing technology stack is much more complicated than you expected. If you’re going it alone, keep in mind when planning a project that two disparate pieces of technology often act like two only children playing together: sure, they might get along for a bit, but as soon as you turn your back there will be a miscommunication and they’ll start fighting.

Take time to find someone with expertise in integrations, preferably a consultant or partner with plenty of resources and experience in integrating heterogeneous systems.

3. Take your customer’s journey into consideration

The principal reason any business should contemplate upgrading legacy technology is that it improves the customer experience. Many organizations make decisions based on how they will increase profit and revenue without taking into consideration how that profit and revenue are made.

If you have an established customer base, improving their experience should be a top priority because they require minimal effort to retain. However, no matter how superior your services or products are, if there is an alternative with a smoother customer experience you can be sure that your long-time customers will move on. As humans, we almost always take the path of least resistance. If you can improve even one aspect of a customer’s journey, you have chosen wisely.

4. Ensure that Technical Debt doesn’t become compounded


Technical debt is the idea that choosing the simple, low-cost solution now all but ensures you pay a higher price later. The more often you choose that option, the more debt you accumulate, and in the long run you will be paying it back with interest.

Unfortunately, this is one of the most common mistakes when undertaking a legacy upgrade project, and it is where being frugal will not pay off. If you can convince the decision makers and powers that be of one thing, it should be not to choose exclusively on the basis of lower upfront costs. You must take into account the total cost of ownership. If you’re going to invest time and considerable effort in doing something, you should make sure it’s done right the first time, or it could end up costing a lot more.

5. Focus on fixing substantiated, validated issues

It’s not often, but sometimes when a new technology comes along, like blockchain for instance, we become so enamored with the possibilities that we forget to ask: do we need it?

It’s like having a hammer in hand and running around looking for something to nail. Great, you have the tool, but if there isn’t an obvious problem to fix with it, then it’s just a status symbol, and that doesn’t get you very far in business. There is no cure-all technology. You need to outline what problems you have, prioritize them, and then find the technology that best suits your needs. Problem first, then solution.

6. Avoid technology and vendor lock-in

After you’ve defined which processes you need to modernize, be very mindful when choosing the technology and vendor to address them. Vendor lock-in is serious and has been the bane of many technology leaders. If you make the wrong choice here, it could end up costing you substantially to make a switch later, even more than the initial project itself.

A good tip here is to look into what competitors are doing. You don’t have to copy what everyone else is doing, but to remain competitive you have to at least be as good as your competitors. Take the time to understand and research all of the technologies and vendors available to you, and ensure your consultant has a good grasp of how to plan your project with vendor lock-in in mind.

Next Steps:

Modernizing a major legacy system may be one of the biggest projects your IT department has ever taken on. There are many aspects to consider and no two environments are exactly the same, but it’s by no means an impossible task. It has been done before, and thankfully you can draw from these experiences.

The best suggestion I can give is to have experienced guidance available throughout this process. If that isn’t accessible within your current team, find a consultant or partner with the needed experience so you don’t have to worry about making the wrong choices and creating a bigger issue than you had in the first place.

At TxMQ we’ve been helping businesses assess, plan, implement, and manage disparate systems for 39 years. If you need any help or have any questions, please reach out today. We would love to hear from you.

Managed Services: Regain focus, improve performance and move forward.

Managed IT Services Help You Improve Your Team’s Performance.

There is no such thing as multitasking. Research consistently shows that if you spread your focus too thin, something is going to suffer. Just like a computer system, if you try to work on too many tasks at once, you’re going to slow down, create errors, and just not perform as expected.

You must regain your focus in order to improve performance and move forward.

The fact is, no matter what your industry or business is today, your success depends on your IT team staying current, competent, and competitive within the industry. Right now there is more on your IT department’s plate than ever before. Think about how much brain power and talent you’re misusing when your best IT people are spending the bulk of their effort just managing the day-to-day. Keep in mind that most of these issues can be easily fixed, or even avoided, with proper preparation.

How do you continuously put out fires, keep systems running smoothly, and still have time to plan for the future?

As the legendary Ron Swanson once said, “Never half-ass two things. Whole-ass one thing.”

Don’t go it alone when you can supplement your team and resources.

Overworked employees have higher turnover (-$), make more mistakes (-$), and work slower (-$)(-$). This is costing you more than you can ever fully account for, though your CFO may try. Think about those IT stars who are so important to the success of your business. You may just lose them, starting another fire to put out when you’re trying to gain, train, and grow.

Managed IT Services Can Put You Back on Track!

No one knows your business better than you, and I’m just guessing, but I bet you’ve gotten pretty good at it by now. However, as any good leader or manager knows, without your focus on the future you could lose out as your industry changes, and in case you didn’t notice, it’s already changing.

To remain competitive, you need an advantage that can help you refocus your team and let you do you, because that’s what you do best.

At TxMQ we are not experts in your business, and we would never claim to be. Our Managed Services mission is to take care of the stuff you don’t have the resources, expertise, or time for, and make it run at its best. You can then refocus your full attention on improving your business.
Whether you’re producing widgets to change the world, making a life-saving drug, or providing healthy food for the masses, you don’t have to spread yourself thin. We monitor, manage, and maintain the middleware systems and databases that power your business. As a provider we are technology- and systems-agnostic.
What we do is nothing you can’t do yourself, and maybe you already are doing it. But if resources are scarce, putting extra work on your existing team can cost you more than it needs to. TxMQ’s Managed Services teams fill the gaps within your existing IT resources to strengthen and solidify your systems, so that you can focus on everything else.

TxMQ’s Managed Services team helps you refocus, so you can concentrate on growth and tackling your industry and business challenges.

If you’re interested in getting more sleep at night and learning more about our Managed Services please reach out or click below for more info.

Learn About Managed Services and Support With TxMQ Click Here!

North America: Don’t Ignore GDPR – It Affects Us, Too!

Hey, North America – GDPR Means Us, Too!

It’s well documented, and fairly well socialized across North America, that on May 25th of 2018 the GDPR, or General Data Protection Regulation, formally goes into effect in the European Union (EU).
Perhaps less well known is how corporations located in North America, and around the world, are actually impacted by the legislation.

The broad stroke is, if your business transacts with and/or markets to citizens of the EU, the rules of GDPR apply to you.

For those North American-based businesses that have mature information security programs in place (such as those following PCI, HIPAA, NIST and ISO standards), your path to compliance with the GDPR should not be terribly long. There will be, however, some added steps needed to meet the EU’s new requirements; steps that this blog is not designed to enumerate, nor counsel on.
It’s safe to say that data protection and privacy is a concern involving a combination of legal, governance, process, and technical considerations. Here is an interesting and helpful FAQ link on the General Data Protection Regulation policies.
Most of my customers represent enterprise organizations, which have a far-reaching base of clients and trading partners. They are the kinds of companies who touch sensitive information, are acutely aware of data security, and are likely to be impacted by the GDPR.
These enterprises leverage TxMQ for, among other things, expertise around Integration Technology and Application Infrastructure.
Internal and external system access and integration points are areas where immediate steps can be taken to enhance data protection and security.

Critical technical and procedural components include (but are not limited to):

  • Enterprise Gateways
  • ESBs and Messaging (including MQ and FTP – also see Leif Davidsen’s blog)
  • Application & Web Servers
  • API Management Strategy and Solutions
  • Technology Lifecycle Management
    • Change Management
    • Patch Management
    • Asset Management

The right technology investment, architecture, configuration, and governance model go a long way towards GDPR compliance.
Tech industry best practices should be addressed through a living program within any corporate entity. In the long run, setting and adhering to these policies protects your business and saves your business money (through compliance and efficiency).
In short, GDPR has given North America another important reason to improve upon our data and information security.
It affects us, and what’s more, it’s just a good idea.

What you need to know about GDPR

What is GDPR?

GDPR is the European Union’s General Data Protection Regulation.
In short, it is known as the ‘right to be forgotten’ rule. The intent of GDPR is to protect the data privacy of European Union (EU) citizens, yet its implications are potentially far-reaching.

Why do EU citizens need GDPR?

In most of the civilized world, individuals have little true awareness of the amount of data that is stored about them. Some of it is accurate, some quite the opposite.


If I find an error-strewn rant about my small business somewhere online, my ability to correct it, or even have it removed, is limited almost entirely to posting a counter-statement or begging whoever owns the content in question to remove it. I have no real legal recourse short of a costly, destined-to-fail lawsuit.
The EU sought to change this for its citizens, and thus GDPR was born.
In December of 2015, the long process of designing legislation to create a new legal framework to ensure the rights of EU citizens was completed. It was ratified a year later and becomes enforceable on May 25th of this year (2018).

There are two primary components to the GDPR legislation.

  1. The General Data Protection Regulation, or GDPR, is designed to enable individuals to have more control of their personal data.

It is hoped that these modernized and unified rules will allow companies to make the most of digital markets by reducing regulation while regaining consumers’ trust.

  2. The data protection directive is the second component.

It ensures that law enforcement bodies can protect the rights of those involved in criminal proceedings, including victims, witnesses, and other parties.

It is also hoped that the unified legislation will facilitate better cross-border cooperation among law enforcement agencies to proactively enforce the laws, while improving prosecutors’ ability to combat criminal and terrorist activities.

Key components of GDPR

The regulation is intended to establish a single set of pan-European rules, designed to make it simpler to do business across the EU. Organizations are subject to the regulation simply by collecting data on EU citizens, regardless of where they are based.

Personal Data

Personal data is defined by both the directive and the GDPR as information relating to a person who can be identified, directly or indirectly, in particular by reference to a name, ID number, location data, or other factors related to physical, physiological, mental, economic, cultural, or related factors (including social identity).
So this means many things, including IP addresses and cookies, will be regarded as personal data if they can be linked back to an individual.
The regulations separate the responsibilities and duties of data controllers vs. data processors, obligating controllers to engage only those processors that provide “sufficient guarantees to implement appropriate technical and organizational measures” to meet the regulation’s requirements and protect data subjects’ rights.
Controllers and processors are required to “implement appropriate technical and organizational measures” taking into account “the state of the art and costs of implementation” and “the nature, scope, context and purposes of the processing as well as the risk of varying likelihood and severity for the rights and freedoms of individuals”.

Security actions “appropriate to the risk”

The regulations also provide specific suggestions for what kinds of security actions might be considered “appropriate to the risk”, including:

  • The pseudonymization and/or encryption of personal data (see the code sketch after this list).
  • The ability to ensure the ongoing confidentiality, integrity, availability, and resilience of systems and services processing personal data.
  • The ability to restore the availability and access to data in a timely manner in the event of a physical or technical incident.
  • A process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing.
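
To make the first bullet above a little more concrete, here is a minimal, illustrative Java sketch of one common pseudonymization pattern: replacing a direct identifier with a keyed hash (HMAC) so the raw value never travels downstream, while records can still be correlated. This is an example technique only, not compliance or legal guidance; the key, its storage, and the choice of method are assumptions that depend on your own risk assessment.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Pseudonymizer {

    private final SecretKeySpec key;

    // The secret key must be stored and managed separately from the
    // pseudonymized data (e.g., in a vault or HSM) for this to count
    // as pseudonymization rather than plain hashing.
    public Pseudonymizer(byte[] secretKey) {
        this.key = new SecretKeySpec(secretKey, "HmacSHA256");
    }

    // Replace a direct identifier (email, account number, etc.) with a
    // stable pseudonym that can still be used to join records.
    public String pseudonymize(String identifier) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] digest = mac.doFinal(identifier.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        Pseudonymizer p = new Pseudonymizer("demo-key-change-me".getBytes(StandardCharsets.UTF_8));
        System.out.println(p.pseudonymize("jane.doe@example.com"));
    }
}
```

Encryption of data at rest and in transit, the other half of that bullet, is typically handled by the database, messaging, and transport layers rather than by application code like this.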

Controllers and processors that adhere to either an approved code of conduct or an approved certification (such as certain industry-wide accepted frameworks) may use these to demonstrate their compliance.
The controller-processor relationships must be documented and managed with contracts that mandate privacy obligations.

Enforcement and Penalties

There are substantial penalties and fines for organizations that fail to comply with the regulation.
Regulators will now have the authority to issue penalties equal to the greater of 10 million euros or 2% of the entity’s global gross revenue for violations of record-keeping, security, breach-notification, and privacy impact assessment obligations. However, violations of obligations related to legal justification for processing (including consent), data subject rights, and cross-border data transfers may result in double the above stipulated penalties. For a firm with, say, 1 billion euros in global revenue, that works out to exposure of up to 20 million euros at the lower tier and 40 million euros at the higher tier.
It remains to be seen how the legal authorities tasked with this compliance will perform.

Data Protection Officers

Data Protection Officers must be appointed for all public authorities, and wherever the core activities of the controller or the processor involve “regular and systematic monitoring of data subjects on a large scale”, or wherever the entity conducts large-scale processing of “special categories of personal data”: personal data such as that revealing racial or ethnic origin, political opinions, religious beliefs, and the like. This likely encompasses large firms such as banks, Google, Facebook, and so on.
It should be noted that there is NO restriction based on organization size; the requirement can reach all the way down to small start-up firms.

Privacy Management

Organizations will have to think harder about privacy. The regulations mandate a risk-based approach, where appropriate organizational controls must be developed according to the degree of risk associated with the processing activities.
Where appropriate, privacy impact assessments must be made, with the focus on individual rights.
Privacy friendly techniques like pseudonymization will be encouraged to reap the benefits of big data innovation while protecting privacy.
There is also an increased focus on record keeping for controllers as well.

Consent

Consent is a newly defined term in the regulations.
It means “any freely given, specific, informed and unambiguous indication of his or her wishes by which the data subject, either by a statement or by clear affirmative action, signifies agreement to personal data relating to them being processed”. Consent does need to be for specified, explicit, and legitimate purposes.
Consent should also be demonstrable. Withdrawal of consent must be clear, and as easy to execute as the initial act of providing consent.

Profiling

Profiling is now defined as any automated processing of personal data to determine certain criteria about a person.

In particular, this means processing used “to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, behaviors, location and more”.

This will certainly impact marketers, as it appears that consent must be explicitly provided for said activities.
There is more, including details on breach notification.
It’s important to note that willful destruction of data is dealt with as severely as a breach.

Data Subject Access Requests

Individuals will have more information about how their data is processed, and this information must be available in a clear and understandable way.
If such requests are deemed excessive, providers may be able to charge for the information.

Right to be Forgotten

This area, while much written about, will require some further clarification, as there are invariably downstream implications the regulations haven’t begun to address. Yet the intent of “right to be forgotten” is clear: individuals have certain rights, and they are protected.

Think you’re ready for GDPR?

Is your business really ready for GDPR? What measures have you taken to ensure you’re in compliance?
With the GDPR taking effect this coming May, companies around the world have a long, potentially costly, road ahead of them to demonstrate that they are worthy of the trust that so many individuals place in them.

The Case for the Crypto Crash

Anyone who knows me knows that I am a big supporter of, and believer in, distributed ledger technology, or DLT.
It has the potential to revolutionize the way we build certain types of solutions, and the way certain systems, software, and institutions, even people, interact.
That being said, I also believe a strong argument can be made that crypto currencies, at least in their current incarnation, are destined to fail.
Full disclosure: I own no crypto currencies.
There is a foundational flaw in the use case of cryptocurrency. It is NOT easily transacted: it often has lengthy or ungainly settlement times, it requires conversion to fiat currency in almost all cases, and it’s generally ill suited to the very task it was designed to perform, namely storing value for transacting business.

It’s hard for people to use crypto currencies.

I have heard firsthand countless stories of transactions intended to be conducted using cryptocurrencies, where the parties wouldn’t agree to them without one side or the other agreeing to guarantee the value against some fiat currency.
If this continues and people aren’t able to use cryptocurrency as a currency, what then? Normal market rules would dictate a crash to zero.
But is this a market that follows normal market rules? What exactly is normal?

Fiat Currency, or Fiat Money:

Let’s back up and look at fiat currency. Take the US dollar.
Early on, the United States dollar was tied to the gold standard. The United States Bullion Depository, more commonly known as Fort Knox, was established as a store in the US for the gold that backed the dollar.
In 1933, the US effectively abandoned this standard, and in 1971, all ties to gold were severed.
So what happened? Effectively nothing. Yet the US backs the dollar.
Why? The US dollar is intimately tied to our financial system by the Federal Reserve, which, as demonstrated for better or worse in the financial crisis of 10 years ago, will do everything in its power to shore up the currency when needed.
So we operate today with the shared belief, some might call it a shared delusion, that there is value in the money we carry in our wallets and in the bank statements we see online.

Is cryptocurrency approaching this level of shared belief?

Who will step in if crypto crashes? In short, no one. There is no governing body, by design, behind any cryptocurrency.
As I write this, all cryptocurrencies are down over 10%, and some are down over 20%. Nothing will prop them back up other than buyers: speculators hoping to buy on the downswing and hold until prices rise again.
So, is this a normal market? I say no, it is not. I see no ultimate destination on this journey, other than disappointment.
If you have risk capital to play with, go ahead, risk some on crypto if you wish.
Personally, I would rather invest my money in companies I can understand, with business models that make sense. That being said, in my case, this also means investing in my company’s work to build solutions on the technology underlying crypto currency, or distributed ledger technology.

You may be asking yourself, how can he support distributed ledger technology and not have faith in cryptocurrency?

The answer here is simple. The technology is solid; the use case of crypto is flawed. Java is solid, but not all Java applications are good applications. Cryptocurrency is just another application running on distributed ledger and, as I have posited herein, a bad one.


Chuck Fried is the President and CEO of TxMQ, a systems integrator and software development company focused on building technical solutions for its customers across the US and Canada.
Follow him at www.chuckfried.com, or @chuckfried on Twitter, or here on the TxMQ blog.

Economic Theory and Cryptocurrency

This post was originally published on ChuckFried.com, with permission to repost on TxMQ.com.

Economic Theory and Cryptocurrency

In a rational market, there are basic principles that apply to the pricing and availability of goods and services. At the same time, these forces affect the value of currency. Currency is any commodity or item whose principal use is as a store of value.
Once upon a time, precious metals and gems were the principal value store. Precious jewels, gold, and silver were used as currency to acquire goods and services. Over time, as nations industrialized, trading required proxy value stores, and paper money was introduced, tied to what became the gold standard. This system lasted into the 20th century.
As nations moved off the gold standard, Keynesian economics became a much-touted model. Proposed by John Maynard Keynes, a British economic theorist, in his seminal, Depression-era work “The General Theory of Employment, Interest and Money”, it introduced a demand-side model whereby nations were shown to have the ability to influence the macro economy by modifying taxes and government spending.
Recently, cryptocurrency has thrown a curveball into our economic models with the introduction of virtual currencies. Bitcoin is the most widely known, but there are many other virtual currencies, or cryptocurrencies as they are now called because of the underlying mathematical formulas and cryptographic algorithms that govern the networks they are built on.

Whether these are currencies or not is itself an interesting rabbit hole to climb down, and a bit of a semantic trap.

They are not stores of value, nor proxies for precious goods, but if party A perceives a value in a bitcoin, and will take it in trade for something, does that not make it a currency?
Webster’s defines currency as “circulation as a medium of exchange” and “general use, acceptance or prevalence.” Bitcoin seems to fit this definition.
Thus the next question…

What is going on with the price of bitcoin?

Through most of 2015, the price of one bitcoin started a slow climb from the high 200s to the mid 400s in US dollars; that in itself is a near-meteoric climb. The run ended at around $423, for reasons outside the scope of this piece; the exact figure depends on the exchange one references for this data.
2016 saw an acceleration of this climb, with a final tally just shy of $900.
It was in 2017 that the wheels really came off, with a feverish, near-euphoric climb in the past few weeks to almost $20,000, before settling recently into a trading range of $15,000 to $16,000 per bitcoin.
So what is going on here? What economic theory describes this phenomenon?
Sadly, we don’t have a good answer, but there are some data points we should review.
First, let’s recognize that for many readers, awareness of bitcoin happened only recently. It bears pointing out that one won’t buy a thing if one is unaware of that thing. Thus, the growing awareness of bitcoin has played a somewhat significant role in driving up its value.
To what extent this affected the price is a mystery, but if we accept this as given, clearly as more and more people learn about bitcoin, more and more people will buy bitcoin.

So what is bitcoin?

I won’t go deep here, since if you are reading this you likely have this foundational knowledge, but bitcoin was created by a person, persons, or group using the pseudonym Satoshi Nakamoto in 2009. It was created to eliminate the need for banks or other third parties in transactions; it also allows for a high degree of anonymity for the holder of the coin.
There is a finite upper limit on the number of bitcoins that can ever be created. The mathematical formula is described in detail in various online sources, including Wikipedia, so I won’t delve into it here. This cap, set at 21 million coins, will be reached when the last coin is mined (again, see Wikipedia), which is variously estimated to occur around the year 2140.
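
The post leaves the formula to outside sources, but the arithmetic behind the cap is compact enough to show here. Assuming the publicly documented protocol parameters (an initial block reward of 50 BTC, halved every 210,000 blocks, with amounts tracked in whole satoshis), a few lines of Java reproduce the roughly 21 million figure:

```java
public class BitcoinSupplyCap {
    public static void main(String[] args) {
        final long BLOCKS_PER_HALVING = 210_000L;
        long subsidySatoshis = 50L * 100_000_000L; // 50 BTC, expressed in satoshis
        long totalSatoshis = 0L;

        // Each era pays the current subsidy for 210,000 blocks,
        // then the subsidy halves (integer division, as in the protocol).
        while (subsidySatoshis > 0) {
            totalSatoshis += BLOCKS_PER_HALVING * subsidySatoshis;
            subsidySatoshis /= 2;
        }

        // Prints a total just under 21,000,000 BTC (about 20,999,999.9769)
        System.out.println(totalSatoshis / 100_000_000.0 + " BTC");
    }
}
```

It is simply the geometric series 210,000 × 50 × (1 + 1/2 + 1/4 + …), which converges to 21 million; integer rounding in the real protocol leaves the final total slightly below that.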

What makes Bitcoin Valuable?

So this ‘capped’ reality also adds to the value, since, like most stores of value, bitcoin has a rarity to it: a fixed number in existence today, and a maximum number that will ever exist.
In addition, more and more organizations are accepting bitcoin as a payment method. This increase in utility, and the subsequent liquidity (it’s not always easy to sell units of bitcoin smaller than a full coin), has also increased the perceived value of the coin.
Contributing to this climb in value recently has been the CryptoKitties phenomenon; a gaming application that rose to popularity far more rapidly than its creators could have foreseen. The subsequent media exposure thrust blockchain, and correspondingly bitcoin, further into the limelight, and the value continued to spike.
Lastly, the CBOE Options Exchange announced that on Monday, December 11th, it will begin trading bitcoin futures. Once again, this action broadcast to a widening audience that bitcoin was real, viable, and worth looking at as part of some portfolios, adding both legitimacy and ease of trade to the mix.
The number of prognosticators calling bitcoin a farce seems near equal to the number calling for a coin to hit a $1 million valuation in 4 years. Who will be right remains to be seen.
For the moment, this author sees this as a bit like Vegas gambling. It’s fun, it’s legal, but you can also lose every penny you gamble; so bet (invest) only what you can afford to lose, and enjoy the ride.

What Digital Cats Taught Us About Blockchain

Given the number of cat pictures that the internet serves up every day, perhaps we shouldn’t be surprised that blockchain’s latest pressure-test involves digital cats. CryptoKitties is a Pokémon-style collecting and trading game built on Ethereum where players buy, sell, and breed digital cats. In a matter of a week, the game has gone from a relatively obscure but novel decentralized application (DAPP) to the largest DAPP currently running on Ethereum. Depending on when you sampled throughput, CryptoKitties accounted for somewhere in the neighborhood of 14% of Ethereum’s overall transaction volume. At the time I wrote this, players had spent over $5.2 million in Ether buying digital cats. The other day, a single kitty was sold for over $117,000.

Wednesday morning I attended a local blockchain meet-up, and the topic was CryptoKitties.

Congestion on the Ethereum node that one player was connected to was so bad that gas fees for buying a kitty could be as high as $100. The node was so busy that game performance was significantly degraded, to the point where the game became unusable. Prior to the game’s launch, pending transaction volume on Ethereum was under 2,000 transactions; now it’s in the range of 10,000-12,000 transactions. To summarize: a game where people pay (lots of) real money to trade digital cats is degrading the performance of the world’s most viable general-purpose public blockchain.

If you’re someone who has been evaluating the potential of blockchain for enterprise use, that sounds pretty scary. However, most of what the CryptoKitties phenomenon has illustrated isn’t news. We already knew scalability was a challenge for blockchain. There is a proliferation of off-chain and side-chain protocols emerging to mitigate these challenges, as well as projects like IOTA and Swirlds, which aim to provide better throughput and scalability by changing how the network communicates and reaches consensus. Work is ongoing to advance the state of the art, but we’re not there yet and nobody has a crystal ball.

So, what are the key takeaways from the CryptoKitties phenomenon?

Economics Aren’t Enough to Manage the Network

Put simplistically, as the cost of trading digital cats rises, the amount of digital cat trading should go down (in an idealized, rational market economy, that is). Yet both the cost of the kitties themselves – currently anywhere from $20 to over $100,000 – and the gas cost required to buy, sell, and breed kitties have gone up to absurd levels. The developers of the game have also increased fees in a bid to slow down trading. So far, nothing has worked.

In many ways, it’s an interesting illustration of cryptocurrency in general: cats have value because people believe they do, and the value of a cat is simply determined by how much people are willing to pay for it. In addition, this is clearly not an optimized, nor ideal, nor rational market economy.

The knock-on effects for the network as a whole aren’t clear either. Basic economics would dictate that as a resource becomes more scarce, those who control that resource will charge more for it. On Ethereum, that could come in the form of gas limit increases by miners which will put upward pressure on the cost of running transactions on the Ethereum network in general.

For businesses looking to leverage public blockchains, the implication is that the risk of transacting business on public blockchains increases. The idea that something like CryptoKitties can come along and impact the cost of doing business adds another wrinkle to the economics of transacting on the blockchain. Instability in the markets for cryptocurrency already makes it difficult to predict the costs of operation for distributed applications. Competition between consumers for limited processing power will only serve to increase risk, and likely the cost, of running on public blockchains.

Simplify, and Add Lightness

Interestingly, the open and decentralized nature of blockchains seems to be working against a solution to the problem of network monopolization. Aside from economic disincentives, there isn’t a method for ensuring that the network isn’t overwhelmed by a single application or set of applications. There isn’t much incentive for applications to be good citizens when the costs can be passed on to end-users who are willing to absorb those costs.

If you’re an enterprise looking to transact on a public chain, your mitigation strategy is both obvious and counter-intuitive: Use the blockchain as little as possible. Structure your smart contracts to be as simple as they can be, and handle as much as you can either in application logic or off-chain. Building applications that are designed to be inexpensive to run will only pay off in a possible future where the cost of transacting increases. Use the right tools for the job, do what you can off-chain, and settle to the chain when necessary.
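
One common way to apply that advice, sketched below in plain, platform-agnostic Java, is to keep the full business payload in your own systems and anchor only a cryptographic fingerprint of it on-chain. The hashing here is standard library code; the final “write the digest to the ledger” step is left as a comment because it depends entirely on which chain and client library you use.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class OffChainAnchor {

    // Hash the full business payload off-chain; only this small, fixed-size
    // digest would ever be written to the ledger.
    public static String fingerprint(String payload) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest(payload.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String invoice = "{\"invoiceId\": \"INV-42\", \"amount\": 1250.00}";
        String digest = fingerprint(invoice);

        // Store the full invoice in your own database or object store,
        // then submit only `digest` in a minimal smart-contract call
        // (client library and contract interface are platform-specific).
        System.out.println("Anchor on-chain: " + digest);
    }
}
```

The ledger still proves the document existed in this exact form at a point in time, but the heavy data, and most of the transaction cost, stays off-chain.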

Private Blockchains for Enterprise Applications

The easiest way to assert control over your DAPPs is to deploy them to a network you control. In the enterprise, the trustless, censorship-free aspects of the public blockchain are much less relevant. Deploying to private blockchains like Hyperledger or Quorum (a permissioned variant of Ethereum) gives organizations a measure of control over the network and its participants. Your platform then exists to support your application, and your application can be structured to manage the performance issues associated with blockchain platforms.

Even when the infrastructure is under the direct control of the enterprise, it’s still important to follow the architectural best practices for DAPP development. Use the blockchain only when necessary, keep your smart contracts as simple as possible, and handle as much as you can off-chain. In contrast to traditional distributed computing environments, scaling a distributed ledger platform by adding nodes increases fault tolerance and data security but not performance. Structuring your smart contracts to be as efficient as possible will ensure that you make best use of transaction bandwidth as usage of an application scales.

Emerging Solutions

Solving for scalability is an area of active development. I’ve already touched on solutions that move processing off-chain. Development on the existing platforms is also continuing, with a focus on the mechanism used to achieve consensus. Ethereum’s Casper proposal would change the consensus mechanism to a proof-of-stake system, where validators put up an amount of cryptocurrency that they stand to lose if they act maliciously. While proof-of-stake has the potential to increase throughput, it hasn’t yet been proven to do so at scale.

Platforms built on alternatives to mining are also emerging.

IOTA has been gaining traction as an Internet of Things-scale solution for peer-to-peer transacting. It has the backing of a number of large enterprises, including Microsoft, and is open source and freely available. IOTA uses a directed acyclic graph as its core data structure, which differs from a blockchain and allows the network to reach consensus much more quickly. Swirlds is coming to market with a solution based on the Hashgraph data structure. Similar to IOTA, this structure allows for much faster time to consensus and higher transaction throughput. In contrast to IOTA, Swirlds is leaderless and Byzantine fault tolerant.

As with any emerging technology, disruption within the space can happen at a fast pace. Over the next 18 months, I expect blockchain and distributed ledger technology to continue to mature. There will be winners and losers along the way, and it’s entirely possible that new platforms will supplant existing leaders in the space.

Walk Before You Run

Distributed ledger technology is an immature space. There are undeniable opportunities for early adopters, but there are also pitfalls – both technological and organizational. For organizations evaluating distributed ledger, it is important to start small, iterate often, and fail fast. Your application roadmap needs to incorporate these tenets if it is to be successful. Utilize proofs of concept to validate assumptions and drive out the technological and organizational roadblocks that need to be addressed for a successful production application. Iterate as the technology matures. Undertake pilot programs to test production readiness, and carefully plan application roll out to manage go-live and production scale.

If your organization hasn’t fully embraced agile methods for application development, now is the time to make the leap. The waterfall model of rigorous requirements, volumes of documentation, and strictly defined timelines simply won’t be flexible enough to successfully deliver products on an emerging technology. If your IT department hasn’t begun to embrace a DevOps-centric approach, then deploying DAPPs is likely to meet internal resistance – especially on a public chain. In larger enterprises, governance policies may need to be reviewed and updated for applications based on distributed ledger.

The Future Is Still Bright

Despite the stresses placed on the Ethereum network by an explosion of digital cats, the future continues to look bright for distributed ledger and blockchain. Flaws in blockchain technology have been exposed somewhat glaringly, but for the most part these flaws were known before the CryptoKitties phenomenon. Solutions to these issues were under development before digital cats. The price of Ether hasn’t crashed, and the platform is demonstrating some degree of resilience under pressure.

We continue to see incredible potential in the space for organizations of all sizes. New business models will continue to be enabled by distributed ledger and tokenization. The future is still bright – and filled with cats!

 

IBM WebSphere Application Server (WAS) v.7 & v.8, and WebSphere MQ v.7.5 End of Support: April 30, 2018

Are you presently running on WAS versions 7 or 8?
   Are you leveraging WebSphere MQ version 7.5?

Time is running out: support for IBM WebSphere Application Server (WAS) v.7 and v.8, and for WebSphere MQ v.7.5, ends in less than six months. As of April 30th, 2018, IBM will discontinue support for all WebSphere Application Server versions 7.0.x and 8.0.x, and for WebSphere MQ v7.5.x.

It’s recommended that you migrate to WebSphere Application Server v.9 to avoid potential security issues that may occur on older, unsupported versions of WAS (and Java).
It’s also recommended that you upgrade to IBM MQ version 9.0.x to leverage new features and avoid costly premium support fees from IBM.

Why should you go through an upgrade?

Many security risks can percolate when running back-level software, especially WAS running on older Java versions. If you’re currently running on software versions that are out of support, finding the right support team to put out your unexpected fires can be tricky and might just blow the budget.
Upgrading WAS & MQ to supported versions will allow you to tap into new and expanding capabilities, and updated performance enhancements while also protecting yourself from unnecessary, completely avoidable security risks and added support costs.

WebSphere Application Server v.9 Highlights

WebSphere Application Server v.9.0 offers unparalleled functionality to deliver modern applications and services quickly, securely and efficiently.

When you upgrade to v.9.0, you’ll enjoy several upgrade perks including:
  • Java EE 7 compliant architecture.
  • DevOps workflows.
  • Easy connection between your on-prem apps and IBM Bluemix services (including IBM Watson).
  • Container technology that enables greater development and deployment agility.
  • Deployment on Pivotal Cloud Foundry, Azure, OpenShift, Amazon Web Services and Bluemix.
  • Ability to provision workloads to IBM cloud (for VMware customers).
  • Enhancements to WebSphere eXtreme Scale that have improved response times and time-to-configuration.

 

IBM MQ v.9.0.4 Highlights

With the latest update, moving to MQ V9.0.4 brings even more substantial and useful feature updates for IBM MQ, beyond what came with versions 8 (z/OS) & 8.5.

When you upgrade to v.9.0.4, enhancements include:
  • Additional commands supported as part of the REST API for admin (see the sketch after this list).
  • Availability of a ‘catch-all’ for MQSC commands as part of the REST API for admin.
  • Ability to use a single MQ V9.0.4 Queue Manager as a single point gateway for REST API based admin of other MQ environments including older MQ versions such as MQ V9 LTS and MQ V8.
  • Ability to use MQ V9.0.4 as a proxy for IBM Cloud Product Insights reporting across older deployed versions of MQ.
  • Availability of an enhanced MQ bridge for Salesforce.
  • Initial availability of a new programmatic REST API for messaging applications.
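
As a rough illustration of the administrative REST API called out in the first bullet above, the sketch below lists the queues on a queue manager using nothing more than standard Java and HTTPS. The host, port, credentials, and even the exact resource path are assumptions to verify against the MQ documentation for your release; the point is simply that MQ administration becomes an ordinary, scriptable HTTP call.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class MqRestAdminExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, and credentials; the path follows the
        // documented pattern for the MQ administrative REST API, but verify
        // it against your own MQ release before relying on it.
        URL url = new URL("https://mqweb.example.com:9443/ibmmq/rest/v1/admin/qmgr/QM1/queue");
        String auth = Base64.getEncoder().encodeToString("mqadmin:passw0rd".getBytes("UTF-8"));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // A successful call returns a JSON document describing the queues.
        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}
```

The same pattern extends to the ‘catch-all’ MQSC endpoint and, with MQ acting as a gateway, to administering the older queue managers mentioned in the bullets above.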

This upgrade cycle also offers you the opportunity to evaluate the MQ Appliance. Talk to TxMQ to see if the MQ Appliance is a good option for your messaging environment.

What's your WebSphere Migration Plan? Let's talk about it!

Why work with an IBM Business Partner to upgrade your IBM Software?

You can choose to work with IBM directly – we can’t (and won’t) stop you – but your budget just might. Working with a premier IBM business partner allows you to accomplish the same task with the same quality, but at a fraction of the price IBM will charge you, with more personal attention and much speedier response times.
Also, IBM business partners are typically niche players, uniquely qualified to assist in your company’s migration planning and execution. They’ll offer you and your company much more customized and consistent attention. Plus, you’ll probably be working with ex-IBMers anyway, who’ve turned in their blue nametags to find greater opportunities working within the business partner network.

There are plenty of things to consider when migrating your software from outdated versions to more current versions.

TxMQ is a premier IBM business partner that works with customers to oversee and manage software migration and upgrade planning. TxMQ subject matter experts are uniquely positioned with relevant experience, allowing them to help a wide range of customers determine the best solution for their migration needs.
Get in touch with us today to discuss your migration and back-level support options. It’s never too late to begin planning and executing your version upgrades.

To check on your IBM software lifecycle, simply search your product name and version on this IBM page, or give TxMQ a call…