Hedera Hashgraph is a Distributed Ledger Technology (DLT) and consensus algorithm. Even though Hashgraph is actually a DAG (Directed Acyclic Graph), it has been referred to as “Blockchain on Steroids” and “Blockchain 2.0” because it addresses many of the issues most DLTs currently face on the road to broader adoption.
Why do we need another DLT?
Hedera Hashgraph is a fast, fair, and secure infrastructure for running Decentralized Applications, or DApps. The technology is ridiculously fast, offers high throughput (with the potential for over one million transactions per second), is asynchronous Byzantine Fault Tolerant (a-What?), and is the only DLT with a mathematically proven consensus mechanism.
Quick answer: it works, it’s secure, and it fixes the issues that have held Blockchain back from becoming a viable enterprise-grade technology since its inception.
Want to learn more about Hashgraph and other DLTs? Reach out below and let us know. We’d love to talk about how to use Distributed Ledger Technologies to disrupt your industry by turning your use case into a fully functioning Decentralized Application.
In 2017, TxMQ’s Disruptive Technologies Group created Exo – an open-source framework for developing applications using the Swirlds Hashgraph consensus SDK. Our intention for Exo was to make it easier for Java developers to build Hashgraph and other DLT applications. It provided an application architecture for organizing business logic, and an integration layer over REST or Java sockets. Version 2 of the framework introduced a pipeline model for processing transactions and the ability to monitor transactions throughout their life cycle over web sockets.
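To make the pipeline idea concrete, here is a minimal Java sketch of the pattern: a transaction passes through an ordered series of stages, each of which can validate, apply, or record it. The class and method names here are illustrative, not the framework’s actual API.

```java
// Hypothetical pipeline-style transaction processing, loosely modeled on
// the pattern described above. Names are illustrative, not Exo's real API.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// A transaction flows through an ordered list of stages.
class Pipeline<T> {
    private final List<Function<T, T>> stages = new ArrayList<>();

    Pipeline<T> addStage(Function<T, T> stage) {
        stages.add(stage);
        return this;
    }

    T process(T transaction) {
        T current = transaction;
        for (Function<T, T> stage : stages) {
            current = stage.apply(current); // each stage may validate, enrich, or persist
        }
        return current;
    }
}

public class PipelineDemo {
    public static void main(String[] args) {
        Pipeline<String> pipeline = new Pipeline<String>()
                .addStage(tx -> { System.out.println("validate:  " + tx); return tx; })
                .addStage(tx -> { System.out.println("consensus: " + tx); return tx; })
                .addStage(tx -> { System.out.println("persist:   " + tx); return tx; });
        pipeline.process("transfer:alice->bob:10");
    }
}
```

Monitoring a transaction’s life cycle then amounts to observing it as it moves from stage to stage, which is what the web-socket notifications in version 2 expose.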
TxMQ used Exo as the basis of work we’ve delivered for our customers, as well as the foundation for a number of proofs-of-concept. As the framework has continued to mature, we began to realize its potential as the backbone for a private distributed ledger platform.
By keeping the programming and integration model consistent, we are able to offer a configurable platform that is friendlier to enterprise developers who don’t come from a distributed ledger background. We wanted developers to be able to maximize the investment they’ve made in the skills they already have, instead of having to tackle the considerable learning curve associated with new languages and environments.
Enterprises, like developers, also require approachability – though from a different perspective. Enterprise IT is an ecosystem in which any number of applications, databases, APIs, and clients are interconnected. For enterprises, distributed ledger is another tool that needs to live in the ecosystem. In order for DLT to succeed in an enterprise setting, it needs to be integrated into the existing ecosystem. It needs to be manageable in a way that fits with how enterprise IT manages infrastructure. From the developer writing the first line of code for their new DApp all the way down to the team that manages the deployment and maintenance of that DApp, everyone needs tooling to help them come to grips with this new thing called DLT. And so the idea for Aviator was born!
Aviator is an application platform and toolset for developing DLT applications. We like to say that it is enterprise-focused yet startup-ready. We want to enable the development of private ledger applications that sit comfortably in enterprise IT environments, flattening the learning curve for everyone involved.
There are three components of Aviator: The core platform, developer tools, and management tools.
Think of the core platform as an application server for enterprise DApps. It hosts your APIs, runs your business logic, handles security, and holds your application data. Each of those components is meant to be configurable so Aviator can work with the infrastructure and skill sets you already have. We’ll be able to integrate with any Hibernate-supported relational database, plus NoSQL datastores like MongoDB or CouchDB. We’ll be delivering smart contract engines for languages commonly used in enterprise development, like JavaScript, Java, and C#. Don’t worry if you’re a Solidity or Python developer; we have you on our radar too. The core platform will provide a security mechanism based on a public key infrastructure, which can be integrated into your organization’s directory-based security scheme or PKI if one is already in place. We can even tailor the consensus mechanism to the needs of an application or enterprise.
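As a rough illustration of that configurability, one way a platform can stay datastore-agnostic is to program against a small persistence interface and choose the implementation (Hibernate-backed, MongoDB-backed, and so on) through configuration. This is a generic sketch of the pattern, not Aviator’s actual code:

```java
// Generic sketch of datastore-agnostic persistence behind a small interface.
// Real implementations would wrap Hibernate, MongoDB, CouchDB, etc.
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

interface LedgerStateStore {
    void put(String key, String value);
    Optional<String> get(String key);
}

// In-memory stand-in; a deployment would select a database-backed
// implementation from configuration instead.
class InMemoryStateStore implements LedgerStateStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public Optional<String> get(String key) { return Optional.ofNullable(data.get(key)); }
}

public class StateStoreDemo {
    public static void main(String[] args) {
        LedgerStateStore store = new InMemoryStateStore(); // swapped by configuration
        store.put("account:alice", "balance=100");
        store.get("account:alice").ifPresent(System.out::println);
    }
}
```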
Developing and testing DApps can be complicated, especially when those applications are integrated into larger architectures. You’re likely designing and developing client code, an API layer, business logic, and persistence. You’re also likely writing a lot of boilerplate code. Debugging an application in a complicated environment can also be very challenging. Aviator developer tools help to address these challenges. Aviator can generate a lot of your code from OpenAPI (Swagger) documents in a way that’s designed to work seamlessly with the platform. This frees developers to concentrate on the important parts and cuts down on the number of bugs introduced through hand-coding. We’ve got tools to help you deploy and test smart contracts, and more tools to help you look at the data and make sure everything is doing what it’s supposed to do. Finally, we’re working on ways to deliver those tools the way developers will want to use them, whether that’s through integrations with existing IDEs like Visual Studio Code or Eclipse, or in an Aviator-focused IDE.
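For a sense of what that generation looks like, here is the kind of JAX-RS endpoint stub a generator might emit from an OpenAPI document. The path, class, and method names are illustrative; Aviator’s actual generated code will differ.

```java
// Illustrative example of a generated JAX-RS endpoint stub.
// Names are hypothetical, not Aviator's actual output.
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/transactions")
public class TransactionsApi {

    // POST /transactions, generated from an OpenAPI operation.
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response submitTransaction(String body) {
        // Generated stub: hand-written business logic plugs in here,
        // typically by handing the payload to the platform's pipeline.
        return Response.accepted().build();
    }
}
```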
The work doesn’t end when the developers have delivered. Deploying and managing development, QA, and production DLT networks is seriously challenging. DLT architectures include a number of components, deployed across a number of physical or virtual machines, scaled across a number of identical nodes. Aviator aims to have IT systems administrators and managers covered there as well. We’re working on a toolset for visually designing your DLT network infrastructure, and a way to automatically deploy that design to your physical or virtual hardware. We’ll be delivering tooling to monitor and manage those assets through our own management tooling, or by integrating into the network management tooling your enterprise may already have. This is an area where even the most mature DLT platforms struggle, and there are exciting opportunities to reduce the friction of managing DLT networks through better management capabilities.
So what does this all mean for Exo, the framework that started our remarkable journey? For starters, it’s getting a new name and a new GitHub. Exo has become the Aviator Core Framework, and can now be found on TxMQ’s official GitHub at https://github.com/txmq. TxMQ is committed to maintaining the core framework as a free, open source development framework that anyone can use to develop applications based on Swirlds Hashgraph. The framework is a critical component of the Aviator Platform, and TxMQ will continue to develop and maintain it. There will be a path for applications developed on the framework to be deployed on the Aviator Platform should you decide to take advantage of the platform’s additional capabilities.
For more information on Aviator, please visit our website at http://aviatordlt.com and sign up for updates.
IBM MQ Tips Straight From TxMQ Subject Matter Experts
At TxMQ we have extensive experience helping clients with their IBM MQ deployments, including planning, integrations, management, upgrades, and enhancements. IBM MQ is a powerful tool that makes a huge difference in your life every day, but most of us only notice it when it’s not working, and one small mistake can wreak havoc on your entire system. Throughout the years we’ve seen just about everything, and we’ve found that some common mistakes are easy to avoid with a little insight. Here are a few tips to keep you up and running:
1. Don’t use MQ as a database. MQ is for moving messages from one application or system to another; a database is the right repository for data at rest. MQ resources are best optimized for data throughput and message delivery efficiency, and IBM MQ performs best when messages are kept small.
2. Don’t expect assured delivery if you didn’t design for it! IBM MQ provides assured one-time message delivery through message persistence settings and advance planning in the application integration design process. That planning covers poison-message handling that could otherwise cause failures or, worse, unexpected results. Know your business requirements for quality of service in message delivery, and make sure your integration design accommodates the advanced settings and features as appropriate.
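As a minimal sketch of the persistence setting, here is how a JMS 2.0 producer marks messages persistent. The connection factory setup and queue name are placeholders for your own IBM MQ configuration:

```java
// Minimal sketch: sending a persistent message with the JMS 2.0 API.
// The ConnectionFactory comes from your IBM MQ configuration (e.g., JNDI);
// the queue name below is a placeholder.
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class PersistentSend {
    public static void send(ConnectionFactory factory) {
        try (JMSContext context = factory.createContext()) {
            Queue queue = context.createQueue("queue:///APP.REQUEST.QUEUE");
            context.createProducer()
                   .setDeliveryMode(DeliveryMode.PERSISTENT) // survives queue manager restarts
                   .send(queue, "order-12345");
        }
    }
}
```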
3. Don’t give every application its own queue manager! Sometimes yes, sometimes no; understand how to analyze what is best for your needs. Optimize your messaging architecture for shared messaging to control infrastructure costs and make the best use of operational support resources.
4. Don’t fall out of support. While TxMQ can offer support for out-of-support products, it’s costly to let support lapse on current products, and even more so if you have to play catch-up!
5. Don’t forget monitoring! MQ is powerful and stable, but if a problem occurs, you want to know about it right away. Don’t wait until your transmission queues (XMITQs) and application queues fill up and bring the queue manager down before you respond!
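One way to watch queue depth programmatically is with MQ’s PCF administration classes. The sketch below, with placeholder connection details and an assumed 80% alert threshold, inquires on a queue’s current and maximum depth:

```java
// Sketch: polling queue depth with IBM MQ's PCF classes so an alert can
// fire before queues fill up. Host, port, channel, and queue names are
// placeholders; adapt error handling to your environment.
import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;
import com.ibm.mq.headers.pcf.PCFMessage;
import com.ibm.mq.headers.pcf.PCFMessageAgent;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        PCFMessageAgent agent = new PCFMessageAgent("mqhost", 1414, "SYSTEM.ADMIN.SVRCONN");
        try {
            PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
            request.addParameter(CMQC.MQCA_Q_NAME, "APP.REQUEST.QUEUE");
            request.addParameter(CMQCFC.MQIACF_Q_ATTRS,
                    new int[] { CMQC.MQIA_CURRENT_Q_DEPTH, CMQC.MQIA_MAX_Q_DEPTH });

            PCFMessage[] responses = agent.send(request);
            int depth = responses[0].getIntParameterValue(CMQC.MQIA_CURRENT_Q_DEPTH);
            int max   = responses[0].getIntParameterValue(CMQC.MQIA_MAX_Q_DEPTH);

            if (depth > max * 0.8) { // assumed alert threshold: 80% full
                System.out.println("WARNING: queue depth " + depth + "/" + max);
            }
        } finally {
            agent.disconnect();
        }
    }
}
```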
In the cloud, on mobile, on-prem, or in the IoT, IBM MQ simplifies, accelerates, and facilitates security-rich data exchange. To keep your MQ running smoothly, reach out and talk with one of our Subject Matter Experts today!
If you like our content and would like to keep up to date with TxMQ don’t forget to sign up for our monthly newsletter.
What should you think about when modernizing your legacy systems?
Updating and replacing legacy technology takes a lot of planning and consideration. It can take years for a plan to become fully realized. Often poor choices during the initial planning process can destabilize your entire system, and it’s not unheard of for shoddy strategic technology planning to put an entire organization out of business.
At TxMQ we play a major role in helping our clients plan, integrate, and maintain legacy and hybrid systems. I’ve outlined a few areas to think about when planning (or in some cases re-planning) your modernization of legacy systems.
1. The true and total cost of maintenance
2. Utilize technology that integrates well with other technologies and systems
3. Take your customer’s journey into consideration
4. Ensure that technical debt doesn’t become compounded
5. Focus on fixing substantiated, validated issues
6. Avoid technology and vendor lock-in
1. The true and total cost of maintenance
Your ultimate goal may be to replace the entire system, but taking that first step typically means making the move to a hybrid environment.
Hybrid environments utilize multiple systems and technologies for various processes. They can be extremely effective, but difficult to manage on your own. If you are a large corporation with seemingly endless resources and an agile staff with an array of skill sets, then you may be prepared. The reality, however, is that most IT departments are on a tight budget, with people multitasking and working far more than 40 hours a week just to maintain current systems.
These days most IT departments just don’t have the resources. This is why so many organizations are moving to Managed IT Services to mitigate costs, take back some time, and become more agile in the process.
When you decide to modernize your old legacy systems, you have to take into consideration the actual cost of maintaining multiple technologies. As new tech enters the marketplace and older technologies and applications move toward retirement, so do the people who historically managed those technologies for you. It’s nearly impossible today to find a person willing to put time into learning a technology that’s on its last legs. It’s a waste of time for them, and it will be a huge drain on time and financial resources for you. It’s like learning to fix a steam engine instead of a modern electric engine: I’m sure it’s a fun hobby, but it will probably never pay the bills.
You can’t expect newer IT talent to accept work that means refining and utilizing skills that will soon no longer be needed, unless you’re willing to pay them a hefty sum not to care. Even then, it’s just a short-term answer, so don’t expect them to stick around for long, and always have a backup plan. It’s also good to have someone on call who can help in a pinch and provide fractional IT support when needed.
2. Utilize technology that integrates well with other technologies and systems.
Unless you’re looking to rip and replace your entire system, you need to ensure that the new plays well with the old. Spoiler alert: different technologies and systems often don’t play well together.
Just when you think you’ve found that missing piece of software that fixes all the problems your business leaders insist they have, you’ll discover that integrating it into your existing technology stack is much more complicated than you expected. If you’re going it alone, keep this in mind when planning a project: two disparate pieces of technology often act like two only children playing together. Sure, they might get along for a bit, but as soon as you turn your back there’s a miscommunication and they start fighting.
Take the time to find someone with expertise in integrations, preferably a consultant or partner with plenty of resources and experience integrating heterogeneous systems.
3. Take your customer’s journey into consideration
The principal reason any business should contemplate upgrading legacy technology is to improve the customer experience. Many organizations make decisions based on how a change will increase profit and revenue, without considering how that profit and revenue are made.
If you have an established customer base, improving their experience should be a top priority, because existing customers require minimal effort to retain. However, no matter how superior your services or products are, if a competitor offers a smoother customer experience, you can be sure your long-time customers will move on. As humans, we almost always take the path of least resistance. If you can improve even one aspect of a customer’s journey, you have chosen wisely.
4. Ensure that Technical Debt doesn’t become compounded
Technical debt is the idea that choosing the simple, low-cost solution now all but guarantees you’ll pay a higher price in the long run. The more often you choose that option, the more debt you accumulate, and eventually you will pay it back with interest.
Unfortunately, this is one of the most common mistakes in legacy upgrade projects, and it’s where being frugal will not pay off. If you can convince the decision makers and powers that be of one thing, it should be not to choose exclusively on lower upfront costs. You must take into account the total cost of ownership. If you’re going to spend the time and considerable effort to do something, make sure it’s done right the first time, or it could end up costing a lot more.
5. Focus on fixing substantiated, validated issues
It’s not often, but sometimes when a new technology comes along, blockchain for instance, we become so enamored with the possibilities that we forget to ask: do we need it?
It’s like having a hammer in hand and running around looking for something to nail. Great, you have the tool, but if there’s no obvious problem to fix with it, then it’s just a status symbol, and that doesn’t get you very far in business. There is no cure-all technology. You need to outline the problems you have, then prioritize them to find the technology that best suits your needs. Problem first, then solution.
6. Avoid technology and vendor lock-in
After you’ve defined what processes you need to modernize, be very mindful in choosing the right technology and vendor to fix that problem. Vendor lock-in is serious and has been the bane of many technology leaders. If you make the wrong choice here, switching later could cost you substantially, even more than the initial project itself.
A good tip here is to look into what competitors are doing. You don’t have to copy what everyone else is doing, but to remain competitive you have to at least be as good as your competitors. Take the time to understand and research all of the technologies and vendors available to you, and ensure your consultant has a good grasp on how to plan your project taking vendor lock-in into account.
Next Steps:
Modernizing a major legacy system may be one of the biggest projects your IT department has ever taken on. There are many aspects to consider and no two environments are exactly the same, but it’s by no means an impossible task. It has been done before, and thankfully you can draw from these experiences.
The best suggestion I can give is to have experience available to guide you through this process. If that’s not accessible within your current team, find a consultant or partner with the needed experience, so you don’t have to worry about making the wrong choices and creating a bigger issue than you had in the first place.
At TxMQ we’ve been helping businesses assess, plan, implement, and manage disparate systems for 39 years. If you need any help or have any questions, please reach out today. We would love to hear from you.
2018 has been a very exciting year so far for enterprise technology. We see the steady migration from legacy systems to cloud and other flexible platforms. The new kids on the block, Blockchain and Distributed Ledger Technologies (DLT) such as Hyperledger, Ethereum, and Hashgraph, have been making huge strides. We are now seeing more ideas and use cases come to fruition through POCs, moving one step closer to fulfilling the promise of DLT as a viable enterprise platform, and TxMQ’s Disruptive Technologies Group is helping to prove it.
Are you a disruptor or just waiting to be disrupted?
I was recently speaking with the IT leader of a large corporation about the future of technology: IoT, AI, and of course Blockchain/DLT. As we started to talk about Blockchain, I asked him if they had any plans for it. He said: “Yes! I plan to laugh at the people who think Blockchain will matter in a year.” Of course he was being sarcastic, but I was disturbed by his comment. At TxMQ we believe that emerging technologies should matter to the people who make technology their profession. Every IT leader should understand the benefits of Blockchain and the potential disruptive power of all newer technologies, and evaluate how they might change your business and your industry. If you’re a technology or business leader in any industry and you’re not looking to the future, you are just waiting to be put out of business.
For all you skeptics out there: at this point, it’s not a question of if, but when Blockchain or another DLT will disrupt your industry and change the way you do business.
It may not help you buy that Lambo you’ve had your eye on, but it will change both your business and your industry: streamlining processes, increasing transparency in supply chains and transactions, and even creating new revenue streams.
TxMQ is an IT solutions provider with almost 40 years of experience helping clients large and small maintain IT resources, integrate solutions, and prepare for the future of technology. We are working harder than ever to ensure that you are equipped for the next evolution of technology within your industry, so you can offer customers the best service and improved products, and remain competitive.
With this in mind, we recently launched our Disruptive Technologies Group led by industry veteran Craig Drabik. Despite years of head-banging to Death Metal groups like Tool, Pantera, and even the occasional Winger album (don’t judge), he has maintained genius technical abilities and an impeccable work ethic that has helped us make groundbreaking achievements in the DLT space.
Currently, TxMQ’s DTG is working on several revolutionary projects focused on DLT, creating new ways to track and trace, share confidential information, and process transactions. You may have already heard about our partnership with Intiva Health. This very interesting project gave us the opportunity to be one of the first Salesforce developers to replace the data layer within Salesforce.com with Distributed Ledger Technology and leverage its benefits. With this integration, users are able to securely manage credentials, shortening validation times, saving hundreds of hours, and potentially saving so much money that even your CFO would blush.
When all is said and done, Distributed Ledger Technologies, applied correctly to the right use case, can drastically improve the way you do business. It’s true that there is nothing Blockchain or other DLTs can do today that “traditional” technologies can’t, but DLT may still be the right tool for the process you’re looking to improve. If you could use a toaster, why would you use the oven to toast a piece of bread? With new technologies, new efficiencies are realized.
If you haven’t had a chance yet to learn about DLT, reach out to us or visit us at DTG.TxMQ.com for more info. While it may not be on the Final Exam, you do need to know what this is and how it applies to your business…or you could just wait for the next fresh-faced kid with decent coding skills to take over your industry. No pressure.
Managed IT Services Help You Improve Your Team’s Performance.
There is no such thing as multitasking. Research on attention consistently shows that if you spread your focus too thin, something will suffer. Just like a computer system: if you try to work on too many tasks at once, it slows down, creates errors, and just doesn’t perform as expected.
You must regain your focus in order to improve performance and move forward.
The fact is, no matter what your industry or business is today, your success depends on your IT team staying current, competent, and competitive. Right now there is more on your IT department’s plate than ever before. Think about how much brainpower and talent you’re misusing when your best IT talent spends the bulk of its effort just managing the day-to-day. Keep in mind that most of these issues can be easily fixed, or even avoided, with proper preparation.
How do you continuously put out fires, keep systems running smoothly, and still have time to plan for the future?
As the legendary Ron Swanson once said, “Never half-ass two things. Whole-ass one thing.”
Don’t go it alone when you can supplement your team and resources.
Overworked employees have higher turnover (-$), make more mistakes (-$), and work slower (-$)(-$). This is costing you more than you can ever fully account for, though your CFO may try. Think about those IT stars that are so important to the success of your business. You may just lose them, starting another fire to put out when you’re trying to gain, train, and grow.
Managed IT Services Can Put You Back on Track!
No one knows your business better than you, and I’m just guessing, but I bet you’ve gotten pretty good at it by now. However, as any good leader or manager knows, without your focus on the future you could lose out as your industry changes. And in case you haven’t noticed, it’s already changing.
To remain competitive, you need an advantage that can help you refocus your team and let you do you, because that’s what you do best.
At TxMQ we are not experts in your business, and we would never claim to be. Our Managed Services mission is to take care of the things you don’t have the resources, expertise, or time for, and make them run at their best, so you can refocus your full attention on improving your business.
Whether you’re producing widgets to change the world, a life-saving drug, or healthy food for the masses, you don’t have to spread yourself thin. We monitor, manage, and maintain the middleware systems and databases that power your business, and as a provider we are technology- and systems-agnostic.
What we do is nothing you can’t do yourself, and maybe you already are doing it. But when resources are scarce, piling extra work on your existing team can cost you more than it needs to. TxMQ’s Managed Services teams fill in the gaps within your existing IT resources to strengthen and solidify your systems, so that you can focus on everything else.
TxMQ’s Managed Services team helps you refocus, so you can concentrate on growth and on tackling your industry and business challenges.
If you’re interested in getting more sleep at night and learning more about our Managed Services please reach out or click below for more info.
It’s well documented, and fairly well socialized across North America, that on May 25, 2018, the GDPR, or General Data Protection Regulation, formally goes into effect in the European Union (EU).
Perhaps less well known, is how corporations located in North America, and around the world, are actually impacted by the legislation.
The broad stroke is, if your business transacts with and/or markets to citizens of the EU, the rules of GDPR apply to you.
For those North American-based businesses that have mature information security programs in place (such as those following PCI, HIPAA, NIST and ISO standards), your path to compliance with the GDPR should not be terribly long. There will be, however, some added steps needed to meet the EU’s new requirements; steps that this blog is not designed to enumerate, nor counsel on.
It’s safe to say that data protection and privacy is a concern involving a combination of legal, governance, process, and technical considerations. Here is an interesting and helpful FAQ link on the General Data Protection Regulation policies.
Most of my customers represent enterprise organizations, which have a far-reaching base of clients and trading partners. They are the kinds of companies who touch sensitive information, are acutely aware of data security, and are likely to be impacted by the GDPR.
These enterprises leverage TxMQ for, among other things, expertise around Integration Technology and Application Infrastructure.
Internal and external system access and integration points are areas where immediate steps can be taken to enhance data protection and security.
A number of critical technical and procedural components come into play here. The right technology investment, architecture, configuration, and governance model go a long way towards GDPR compliance.
Tech industry best practices should be addressed through a living program within any corporate entity. In the long run, setting and adhering to these policies protects your business and saves your business money (through compliance and efficiency).
In short, GDPR has given North America another important reason to improve upon our data and information security.
It affects us, and what’s more, it’s just a good idea.
GDPR is the European Union’s General Data Protection Regulation.
In short, it is known as the “right to be forgotten” rule. The intent of GDPR is to protect the data privacy of European Union (EU) citizens, yet its implications are potentially far reaching.
Why do EU citizens need GDPR?
In most of the civilized world, individuals have little true awareness of the amount of data that is stored about us: some of it accurate, some quite the opposite.
If I find an error-strewn rant about my small business somewhere online, my ability to correct it, or even have it removed, is limited to posting a counter-statement or begging whoever owns the content in question to remove it. I have no real legal recourse short of a costly, and likely destined-to-fail, lawsuit.
The EU sought to change this for their citizens, and thus GDPR was born.
In December of 2015, the long process of designing legislation to create a new legal framework to ensure the rights of EU citizens was completed. The regulation was formally adopted in April 2016 and becomes enforceable on May 25th of this year (2018).
There are two primary components to the GDPR legislation.
The General Data Protection Regulation, or GDPR, is designed to enable individuals to have more control of their personal data.
It is hoped that these modernized and unified rules will allow companies to make the most of digital markets by reducing regulations, while regaining consumers’ trust.
The data protection directive is a second component.
It ensures that law enforcement bodies can protect the rights of those involved in criminal proceedings, including victims, witnesses, and other parties.
It is also hoped that the unified legislation will facilitate better cross border participation of law enforcement to proactively enforce the laws, while facilitating better capabilities of prosecutors to combat criminal and terrorist activities.
Key components of GDPR
The regulation is intended to establish a single set of cross-European rules, designed to make it simpler to do business across the EU. Organizations everywhere, not just in the EU, are subject to the regulation simply by collecting data on EU citizens.
Personal Data
Personal data is defined by both the directive and the GDPR as information relating to a person who can be identified, directly or indirectly, in particular by reference to a name, ID number, location data, or other factors related to physical, physiological, mental, economic, cultural, or social identity.
This means that many things, including IP addresses and cookies, will be regarded as personal data if they can be linked back to an individual.
The regulations separate the responsibilities and duties of data controllers and data processors, obligating controllers to engage only those processors that provide “sufficient guarantees to implement appropriate technical and organizational measures” to meet the regulation’s requirements and protect data subjects’ rights.
Controllers and processors are required to “implement appropriate technical and organizational measures” taking into account “the state of the art and costs of implementation” and “the nature, scope, context and purposes of the processing as well as the risk of varying likelihood and severity for the rights and freedoms of individuals”.
Security actions “appropriate to the risk”
The regulations also provide specific suggestions for what kinds of security actions might be considered “appropriate to the risk”, including:
The pseudonymization and/or encryption of personal data.
The ability to ensure the ongoing confidentiality, integrity, availability, and resilience of systems and services processing personal data.
The ability to restore the availability and access to data in a timely manner in the event of a physical or technical incident.
A process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing.
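To make the first of those suggestions concrete, here is a generic Java sketch of pseudonymization using a keyed hash (HMAC-SHA256): records remain linkable for processing, but the original identifier can’t be recovered without the key. This illustrates the concept only; it is not compliance guidance.

```java
// Generic sketch of pseudonymization: replace a direct identifier with a
// keyed hash so records stay linkable without exposing the identity.
// Illustration of the concept only, not legal or compliance guidance.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Pseudonymizer {
    private final SecretKeySpec key;

    public Pseudonymizer(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Same input + same key -> same pseudonym, so joins still work;
    // without the key, the original identifier cannot be recovered.
    public String pseudonymize(String identifier) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] digest = mac.doFinal(identifier.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }
}
```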
Controllers and processors that adhere to either an approved code of conduct or an approved certification may use these tools to demonstrate their compliance (such as certain industry-wide accepted tools).
The controller-processor relationships must be documented and managed with contracts that mandate privacy obligations.
Enforcement and Penalties
There are substantial penalties and fines for organizations that fail to conform with the regulations.
Regulators will now have the authority to issue penalties equal to the greater of €10 million or 2% of the entity’s global gross revenue for violations of record-keeping, security, breach-notification, and privacy-impact-assessment obligations. Violations of obligations related to legal justification for processing (including consent), data subject rights, and cross-border data transfers may result in double those penalties: the greater of €20 million or 4% of global gross revenue. For example, a company with €2 billion in global revenue could face a fine of up to €40 million for a record-keeping violation, and up to €80 million for a consent violation.
It remains to be seen how the legal authorities tasked with this compliance will perform.
Data Protection Officers
Data Protection Officers must be appointed for all public authorities, and where the core activities of the controller or the processor involve “regular and systematic monitoring of data subjects on a large scale”, or where the entity conducts large scale processing of “special categories of personal data”; personal data such as that revealing racial or ethnic origin, political opinions, religious belief, etc. This likely encapsulates large firms such as banks, Google, Facebook, and the like.
It should be noted that there is no exemption based on organization size; the requirements can apply all the way down to small start-up firms.
Privacy Management
Organizations will have to think harder about privacy. The regulations mandate a risk-based approach, where appropriate organization controls must be developed according to the degree of risk associated with the processing activities.
Where appropriate, privacy impact assessments must be made, with the focus on individual rights.
Privacy-friendly techniques like pseudonymization will be encouraged, to reap the benefits of big data innovation while protecting privacy.
There is an increased focus on record keeping for controllers as well.
Consent
Consent is a newly defined term in the regulations.
It means “any freely given, specific, informed and unambiguous indication of his or her wishes by which the data subject, either by a statement or by clear affirmative action, signifies agreement to personal data relating to them being processed”. Consent must be for specified, explicit, and legitimate purposes, and it should be demonstrable. Withdrawal of consent must be clear, and as easy to execute as the initial act of providing consent.
Profiling
Profiling is now defined as any automated processing of personal data to determine certain criteria about a person, “in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, behaviors, location and more”.
This will certainly impact marketers, as it appears that consent must be explicitly provided for said activities.
There is more, including details on breach notification. It’s important to note that willful destruction of data is dealt with as severely as a breach.
Data Subject Access Requests
Individuals will have more information about how their data is processed, and this information must be available in a clear and understandable way.
If said requests are deemed excessive, providers may be able to charge for said information.
Right to be Forgotten
This area, while much written about, will require some further clarification, as there are invariably downstream implications the regulations haven’t begun to address. Yet the intent of “right to be forgotten” is clear; individuals have certain rights, and they are protected.
Think you’re ready for GDPR?
Is your business really ready for GDPR? What measures have you taken to ensure you’re in compliance?
With the GDPR taking effect this coming May, companies around the world have a long, potentially costly, road ahead of them to demonstrate that they are worthy of the trust that so many individuals place in them.
Webinar: Blockchain & Distributed Ledger Technology: The Good, The Bad, and The Ugly
This informative webinar on blockchain and distributed ledger technology, presented by TxMQ’s Chuck Fried, with co-hosts Craig Drabik and Miles Roty, dives into distributed ledger technology and how it has the potential to transform entire industries.
By 2022 at least one innovative business built on blockchain will be worth $10 Billion.
By 2030 30% of the global customer base will be made up of things, and those things will use blockchain as a foundational technology with which to conduct commercial activity.
By 2025, the business value added by blockchain will grow to slightly over $176 billion, then surge to exceed $3.1 trillion by 2030.
How will Blockchain and Distributed Ledger Technology affect your industry?
To watch the webinar recording from 01/18/2018 please fill out the form below.
Anyone who knows me knows that I am a big supporter of, and believer in, distributed ledger technology, or DLT.
It has the potential to revolutionize the way we build certain types of solutions, and the way certain systems, software, and institutions, even people, interact.
That being said, I also believe a strong argument can be made that cryptocurrencies, at least in their current incarnation, are destined to fail. Full disclosure: I own no cryptocurrencies.
There is a foundational flaw in the use case of cryptocurrency. It is not easily transacted: settlement times are often lengthy or ungainly, conversion to fiat currency is required in almost all cases, and it’s generally ill-suited to the very task it was designed to perform: storing value for transacting business.
It’s hard for people to use cryptocurrencies.
I have heard firsthand countless stories of transactions intended to be conducted in cryptocurrency, where the parties wouldn’t agree to them without one side or the other guaranteeing value against some fiat currency.
If this continues and people aren’t able to use cryptocurrency as a currency, what then? Normal market rules would dictate a crash to zero.
But is this a market that follows normal market rules? What exactly is normal?
Fiat Currency, or Fiat Money:
Let’s back up and look at fiat currency. Take the US dollar.
Early on, the United States dollar was tied to the gold standard. The United States Bullion Depository, more commonly known as Fort Knox, was established to store the gold that backed the dollar.
In 1933, the US effectively abandoned this standard, and in 1971, all ties to gold were severed.
So what happened? Effectively nothing. Yet the US backs the dollar.
Why? The US dollar is intimately tied to our financial system by the Federal Reserve, which, as demonstrated for better or worse in the financial crisis of 10 years ago, will do everything in its power to shore up the currency when needed.
So we operate today with the shared belief, some might call it a shared delusion, that there is value in the money we carry in our wallets and in the bank statements we see online.
Is cryptocurrency approaching this level of shared belief?
Who will step in if crypto crashes? In short, no one. There is no governing body, by design, behind any cryptocurrency.
As I write this, all cryptocurrencies are down over 10%, and some are down over 20%. Nothing will prop them back up other than buyers: speculators hoping to buy on the downswing and hold until prices rise again.
So, is this a normal market? I say no, it is not. I see no ultimate destination on this journey, other than disappointment.
If you have risk capital to play with, go ahead, risk some on crypto if you wish.
Personally, I would rather invest my money in companies I can understand, with business models that make sense. That being said, in my case, this also means investing in my company’s work to build solutions on the technology underlying crypto currency, or distributed ledger technology.
You may be asking yourself, how can he support distributed ledger technology and not have faith in cryptocurrency?
The answer here is simple: the technology is solid; the use case of crypto is flawed. Java is solid, but not all Java applications are good applications. Cryptocurrency is just another application running on distributed ledger technology and, as I have posited herein, a bad one.
Chuck Fried is the President and CEO of TxMQ, a systems integrator and software development company focused on building technical solutions for their customers across the US and Canada.
Follow him at www.chuckfried.com, or @chuckfried on Twitter, or here on the TxMQ blog.