The truth about income inequality in the US is absolutely staggering…
I’ve taken a short break from blogging, but this was too funny not to post!
Well, this is an interesting development. Arkansas Gov. Mike Beebe (D) is pursuing a creative plan to provide health coverage to poor and low-income workers in his state. Instead of adding thousands of people to the Medicaid program, as the ACA (Obamacare) originally intended, Arkansas will simply enroll them in Obamacare itself. Remember that Obamacare’s structure is to pair an individual mandate to purchase health insurance with subsidies to offset the cost of the premiums.
The feds have given Arkansas permission to pursue a plan that would provide private health insurance to anyone between 0 and 138 percent of the federal poverty level, giving coverage to more than 200,000 of the currently uninsured. The government would pay the entirety of the premium, though consumers might be subject to some co-pays.
Now I have not seen any projections of how this decision could affect health care costs, but I cannot imagine them going anywhere but down. Even though Medicaid does provide health coverage for the poor, it is notoriously inefficient, ineffective, and costly. Whether you like health insurance companies or not, you must admit customers get more health services per dollar than with Medicaid. Perhaps expanding Obamacare can further reduce costs to the taxpayer by eliminating Medicaid altogether.
If anything, this development is more evidence that Obamacare is not a socialist takeover of health care. People who used to receive costly government health care in the form of Medicaid will now be buying private insurance. Not exactly the second coming of Chairman Mao. In this case it’s the precise opposite of socialism: people are receiving fewer government services.
For some actual in-depth analysis of Medicaid and health policy in general, check out the excellent SocioPolitical Dysfunction. I’m sure he has some thoughts on this.
David Frum chats about improving the GOP by learning from the Tories.
Daniel Larison writing for The American Conservative explains why Huntsman failed to gain any traction in the 2012 Republican primaries.
Republican rejection of Huntsman wasn’t because of his record on social and cultural issues, which was actually quite conservative and arguably more conservative on social issues than most of the Republican field that year. He wasn’t rejected because of the domestic agenda he proposed during his presidential run, which included economic proposals that satisfied The Wall Street Journal and his early endorsement of Ryan’s budget proposal. On almost every issue, Huntsman was as far to the right (conventionally defined) as his competitors, and sometimes he was to the right of almost all of them. No, he was mostly rejected on account of his non-confrontational style and diplomatic political persona, his support for withdrawing earlier from Afghanistan, and the fact that he was appointed ambassador to China by a Democratic president. If Huntsman had been judged on his record and the substance of what he was proposing to do, presumably many conservatives dissatisfied with the available choices would have rallied behind him. Of course, just the opposite happened. Hawks absurdly dismissed him as being “to the left” of Obama on foreign policy, and despite being the only Republican candidate with meaningful foreign policy experience he was written off because he failed to conform to everything that Republican hard-liners wanted. Huntsman’s experience is a reminder of the overwhelming, built-in opposition inside the party to any advocacy for foreign policy restraint, no matter how mild it may be.
More on Jon Huntsman here.
An article at Slate floats a fascinating question: “what happens when machines are so powerful they can make discoveries no human could possibly understand?” What would the implications be if (or when) we reach that point?
But what if it were possible to create discoveries that no human being can ever understand? For example, if I were to give you a set of differential equations, while we have numerical and computational methods of handling these equations, not only could it be difficult to solve them mathematically, but there is a decent chance that no analytical solution even exists.
So what of this? Does such a hint of non-understandable pieces of reasoning and thought mean that eventually there will be answers to the riddle of the universe that are going to be too complicated for us to understand, answers that machines can spit out but we cannot grasp? Quite possibly. We’ve already come close. A computer program known as Eureqa that was designed to find patterns and meaning in large datasets not only has recapitulated fundamental laws of physics but has also found explanatory equations that no one really understands. And certain mathematical theorems have been proven by computers, and no one person actually understands the complete proofs, though we know that they are correct. As the mathematician Steven Strogatz has argued, these could be harbingers of an “end of insight.” We had a wonderful several-hundred-year run of explanatory insight, beginning with the dawn of the Scientific Revolution, but maybe that period is drawing to a close.
So what does this all mean for the future of truth? Is it possible for something to be true but not understandable? I think so, but I don’t think that is a bad thing. Just as certain mathematical theorems have been proven by computers, and we can trust them, we can also endeavor to create more elegantly constructed, human-understandable versions of these proofs. Just because something is true doesn’t mean we can’t continue to explore it, even if we don’t understand every aspect.
But even if we can’t do this—and we have truly bumped up against our constraints—our limits shouldn’t worry us too much. The non-understandability of science is coming, in certain places and small bits at a time. We’ve grasped the low-hanging fruit of understandability and explanatory elegance, and what’s left might be possible to exploit, but not necessarily to completely understand. That’s going to be tough to stomach, but the sooner we accept this, the better chance we have of allowing society to appreciate how far we’ve come and apply non-understandable truths to our technologies and creations.
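As an aside, the “no analytical solution” point from the excerpt can be made concrete with a classic case: the integrand e^(-x²) has no elementary antiderivative, so there is no closed-form answer to write down, yet a few lines of numerical integration approximate the result to many digits. The sketch below (my own illustration using a simple midpoint rule, not anything from the article) shows how we can trust a number we cannot derive by hand:

```python
import math

def integrate_gaussian(upper, steps=100_000):
    """Approximate the integral of e^(-x^2) from 0 to `upper`
    using the midpoint rule. No elementary antiderivative exists,
    so this is a truth we compute rather than solve analytically."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h          # midpoint of the i-th subinterval
        total += math.exp(-x * x) * h
    return total

# As `upper` grows, the value approaches sqrt(pi)/2 ~ 0.8862,
# a fact known from theory even though the indefinite integral
# cannot be written in elementary terms.
print(integrate_gaussian(10.0))
```

The interesting inversion here mirrors the article’s theme: we can verify the computed number against independent theory (the known limit of the Gaussian integral) without ever possessing a step-by-step symbolic derivation of the integral itself.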
What do you think? Could you trust the findings of machines that no human could ever verify?