
Health Care – The “simple” economics July 15, 2013

Posted by stewsutton in Economics, Fitness, Healthcare, Politics.

Within the U.S.A. there is an ongoing debate/conversation/argument/fight about healthcare and how we can make it better.  While the system as it currently exists has numerous complexities, there are three principles that can anchor a common-sense discussion of the healthcare topic:

  1. Cost
  2. Quality
  3. Access

In his book “Free, Perfect, and Now” (the three insatiable customer demands), Rob Rodin gives a simple economic example of three qualities you can never have fully at the same time.  A thing can be free, and you may even be able to get it immediately (now), but it will not be perfect.  For it to be perfect, it would need to address a multitude of different needs; making something free and available now requires that it address only a specific (limited) set of needs.

The other dimensions work the same way: if we strive to make a thing “perfect,” it will likely not be free, and in the quest to specify its “perfectness” we have guaranteed that it will not be available now.  With these things in mind, let’s consider the simple principles of healthcare economics (cost, quality, and access).  They follow the same free-perfect-now dimensions described in Rob Rodin’s book.
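The trade-off described above can be sketched as a tiny toy model: treat the three demands as weights competing for one fixed budget, so emphasizing any one dimension necessarily shrinks the others. The budget framing, the function, and all numbers here are my own illustrative assumptions, not anything from Rodin's book.

```python
def allocate(free: float, perfect: float, now: float, budget: float = 1.0):
    """Scale relative weights for the three demands to fit a fixed budget.

    The three returned values always sum to the budget: you can shift
    emphasis between free, perfect, and now, but you can never score
    100% on all three at once.
    """
    total = free + perfect + now
    if total <= 0:
        raise ValueError("at least one dimension must have positive weight")
    scale = budget / total
    return (free * scale, perfect * scale, now * scale)

# Emphasizing "perfect" (quality) leaves little budget for cost and speed.
print(allocate(free=1, perfect=8, now=1))  # roughly (0.1, 0.8, 0.1)
```

Substituting cost, quality, and access for free, perfect, and now gives the same picture: maximizing quality nationally draws budget away from cost and access.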

For many years America had the best healthcare system in the world, and even today one can say with confidence that the quality of American healthcare is the highest in the world.  However, in our quest for quality we have introduced significant complexity into the system.  This, it seems, is driven by a belief that the healthcare system can be engineered at a massive national level.  That, it seems, is where we have made a bet that is not working out quite as we planned.

Not that long ago (in the ’50s and ’60s) there was a great deal of research going on in healthcare, and that research found its way into practice through the individual motivation of practicing doctors, nurses, and other healthcare professionals.  Practicing physicians learned new things and applied them to improving their patients’ outcomes.  Things were working pretty well.

Then, it seems, a movement arose to “assure” that all medical professionals were following the same quality guidelines, and moreover that these guidelines would be enforced.  Around the same time came a desire to make the really expensive procedures affordable to patients, so medical insurance became commonplace and, increasingly, a key part of the compensation package offered by employers.  But not all companies offered it, and that is where we start to see inequality in the system: not everyone can “afford” to get “access” to the higher quality of care offered to those with insurance.

In an attempt to address the “cost of access” problem, managed care was introduced, on the idea that a free-market system operating at larger scale could drive efficiency, deliver high-quality (but expensive) procedures at a lower price point, and thereby give access to patients who previously could not afford such services.

And while all of this is happening, an increasing burden and financial catastrophe is taking shape within American hospitals.  They use the revenue from high-end medical procedures to finance “free” services for those who cannot afford to pay; to a great extent, hospitals have become the front lines for delivering free healthcare to those in need without the ability to pay.  The economics of this strange configuration drives up the cost of the “paid” procedures, and since there are now layers of administration and oversight that must be compensated within this delivery model, costs rise further.

Then along comes the Affordable Care Act (Obamacare).  It proposes to solve the skyrocketing cost of healthcare while extending access to 100% of the population and, at the same time, keeping quality the same.  In effect, the Affordable Care Act seeks to make healthcare free, perfect, and now.  We have seen that this is just not possible, and trying to engineer the impossible on a grand national scale makes it even less likely.

So my suggestion is to go back to the way we practiced healthcare before the days of managed care, big insurance, and heavy government administrative regulation: local doctors who get good educations, keep abreast of new developments, and, at their individual pace, factor those developments into their medical practices, thereby improving outcomes for their patients.

With such a system we will certainly have “good doctors” and “bad doctors,” and the outcomes that result from that disparity in quality.  But since we cannot engineer 100% success in cost, quality, and access, this highly resilient and distributed approach can more effectively serve national needs by focusing on local community needs.  It is a simple approach, and we need not make the problem complex just so we feel compelled to solve it with a massive national program.  The solution, it seems, is to back away from the massive national program and to return to basic principles that have always worked best in the aggregate: fostering a provider-client relationship that is rooted in the local community.


The 21st Century Cyber-Economy June 1, 2012

Posted by stewsutton in Economics, Information Policy, Information Technology, Knowledge Management, Politics, Security.

Disclaimer:  This post is composed based on a review of publicly available information and a reflective commentary on events widely reported within the press.  No “inside” information or perspective has colored or enhanced the commentary presented here, and these opinions are solely and completely associated with the author.

As a technologist with an interest in preserving knowledge and improving the way we work with information, I am distracted by the subject of cyber warfare.  It’s all over the news and hard to ignore if you have access to the Internet, TV, or national newspapers.  According to a report in the New York Times, President Obama accelerated a plan to cripple Iran’s uranium enrichment program by hacking into the program’s home base, Natanz. Although the program began in 2006, the Times reports that Obama pushed increasingly aggressive attacks even in his early days in office.

The New York Times writer David E. Sanger claims that the U.S., together with Israel, developed a worm called Stuxnet.  This is a very sophisticated virus, so advanced that Symantec and other security experts could not figure out who made it. The Stuxnet worm crippled 20% of the Iranian centrifuges at Natanz.  Analysts and critics are still debating whether Stuxnet seriously impeded Iran’s nuclear ambitions or just slowed them down a bit.  However, nobody is denying the serious implications this tactic has for modern warfare.

This TED Talk video from March 2011 describes how, when first discovered in 2010, the Stuxnet computer worm posed a baffling puzzle. Beyond its sophistication at the time, there loomed a more troubling mystery: its purpose. Ralph Langner, a German control-system security consultant, and his team helped crack the code that revealed this digital warhead’s final target. In a fascinating look inside cyber-forensics, he explains how, and makes a bold (and, it turns out, correct) guess at its shocking origins.

The Stuxnet cyber attack represents a newly established method of warfare.  More recently, the equally insidious Flame virus has been capturing the attention of cyber-watchers. There is still speculation about who originated that specific virus, but the frequency of cyber-actions appears to be on the uptick.

The following references are links to press sites where the ownership and origin of Stuxnet and other viruses within this emerging Cyberwar are being declared.

  1. Slashgear
  2. Washington Post
  3. PC Mag
  4. Ars Technica
  5. Reddit
  6. Telegraph
  7. Extreme Tech
  8. New York Times
  9. Tech Crunch
  10. Yahoo News

With all of this making big news today, June 1, 2012, we have some interesting counter-dialog that emerged in a publication on May 31, 2012 from Adam P. Liff, a doctoral candidate in the Department of Politics at Princeton University.  The article, published in the Journal of Strategic Studies, is titled “Cyberwar: A New ‘Absolute Weapon’? The Proliferation of Cyberwarfare Capabilities and Interstate War.”  Its central objective is to explore the implications of the proliferation of cyberwarfare capabilities for the character and frequency of interstate war.  The contrarian view expanded on within the paper is that cyberwarfare capabilities may actually decrease the likelihood of war.

In one hypothesis, computer network attacks (CNA) represent a low-cost yet potentially devastating asymmetric weapon.  The hypothesis is that this asymmetry will increase the frequency of war by raising the probability of conflict between weak and strong states that would otherwise not fight due to the disparity in their conventional military strength.

A second hypothesis put forward by Liff is that the plausible deniability and difficulty of attributing cyberattacks could lead potential attackers to be less fearful of retaliation, and thereby to use CNA where they would not dare attack with conventional weapons.  Although it took a while for the Stuxnet virus to be attributed, it is likely that the time required to attribute an attack will shrink.  It’s all a matter of computer algorithms, processing power, and effective application of cyber-forensics: those with the bigger computers, better algorithms, and smarter scientists have a decided edge in conducting an effective investigation of a cyber crime scene.

Another hypothesis put forward by Liff is that the difficulty of defending against cyberattacks will render states exceedingly vulnerable to surprise attacks.  And since states may feel they cannot afford not to strike first, the offensive advantage of CNA may increase the frequency of preemptive war.  At present, however, it seems that for a few more years the cyberwarfare domain will concede an advantage to actors with considerably more resources, thereby offering the offensive edge to those actors.

A summary conclusion offered by Liff is that in some situations CNA, as an asymmetric weapon, may actually decrease the frequency of war by offering relatively weak states an effective deterrent against belligerent adversaries.  While this opinion is interesting, it seems unlikely, since a weak state’s ability to guard its secrets directly affects another state’s confidence in the success of a preemptive cyberattack.

So with all of this “lead-up,” we can consider the implications of this “cyber stuff” and how it affects our day-to-day information economy and the way we work.  To ground the discussion, we should acknowledge the information ecosystem that is increasingly prevalent within our modern economy and the relationships among its major components.  First, let us simply describe the information ecosystem:

  1. Wireless access points
  2. Persistent network connections
  3. Data and applications in the cloud

To make these three elements more concrete, consider a person using an iPhone or iPad (access point) over WiFi in a coffee shop (persistent network connection) to update their Facebook or Twitter account (data and applications in the cloud).  These examples go far beyond this simple consumer picture; the same mix increasingly applies to business applications.  We are rapidly moving from desktop computers to laptops to tablets, and for the majority of future information transactions, mobility will extend into wearable computing devices like fancy eyeglasses (Google Glass).  This increased mobility is irreversible; the technology that supports and speeds it will win in the marketplace and will further establish fully mobile computing as the way we transact with information.  The difficulty, however, is that increased mobility brings an expanded reliance on networks and on the big data and application centers where all the digital stuff we work with is stored and served up.

So while we had the “big 3” automakers back in the ’60s and ’70s, we now have the “big 3” digital service providers of the 21st century in the form of Amazon, Apple, and Google.  You have got your iPhone, iPad, Kindle, or Android device accessing an increasing portfolio of content and applications from the “big 3” digital service providers (think Kindle books, Apple iTunes music, and Google Drive for your documents and other digital data).

So while the cyber threat is usually spoken about in relation to national infrastructure and its associated information assets, the threat to basic day-to-day personal information systems is also a reality.  It would not be a national-security incident for people to lose access to their music on iTunes, their photo albums on Flickr, or their collection of novels from the Kindle store, but the increasing ubiquity with which we traverse this collective set of services forecasts a more complete reliance on similar arrangements for everything we do within the modern information economy: digital wallets in our mobile phones, online banking from wherever we are, controlling access to systems within our homes, and even an education system with increasing emphasis on online learning and instruction.  It is a pretty safe bet that the information ecosystem will be pervasive and that all information will need to successfully traverse it.

Our daily lives (both business and personal) will become increasingly dependent on the reliability and performance of this information ecosystem.  It is a relatively simple inference that the ecosystem will be under constant attack in the “cyber domain,” and that this will bring about a more complex reliance on government-based protection of our “cyber border,” in a manner not far removed from how we protect our physical border.  Will this have a profound and lasting effect on the way we interact with our information?  Certainly it will.  There will be less anonymity in information transactions (since strong tie-ins with verifiable identity are foundational to improved security), so you will be leaving digital “breadcrumbs” of your transactions wherever you go within the cyber domain.  How much of this digital history will need to be publicly accessible, and how much will remain private, will be the subject of much dialog, innovation, services, and likely government policy.  It is going to be, no, it already is a brave new world, and digital cyberwar will just be another aspect of the 21st-century way of life.