Technology and Government: San Francisco vs. New York

In a recent PandoMonthly interview, John Borthwick made an interesting point. Many cities are trying to copy the success of Silicon Valley/Bay Area startups by being like San Francisco: hip, fun urban areas designed to attract young entrepreneurs and developers (Austin comes to mind). However, the relationship between tech workers and other residents is a strained one: witness graffiti to the effect of “trendy Google professionals raise housing prices” and the “startup douchebag” caricature.

New York, on the other hand, has a smaller startup culture (“Silicon Alley”) but much closer and more fruitful ties between tech entrepreneurs and city government. Mayor Bloomberg has been at the heart of this, with his Advisory Council on Technology and his 2012 resolution to learn to code. Bloomberg’s understanding of technology and relationship with movers and shakers in the industry will make him a tough act to follow.

Does this mean that the mayors of Chicago, Houston, or Miami need to be writing JavaScript in their spare time? Of course not. But making an effort to understand and relate to technology professionals could yield great benefits.

Rather than trying to become the next Silicon Valley (a very tall order), it would be more efficacious for cities to follow New York’s model: ask not what your city can do for technology, but what technology can do for your city. Turn bus schedule PDFs into a user-friendly app or–better yet, for many low-income riders–a service that lets you text to see when the next bus will arrive. Instead of making residents call the city to set up services like water and garbage collection, add a form to the city’s website. The opportunities to make city life better for all citizens–not just developers and entrepreneurs–are practically boundless.
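As a sketch of how simple the “text for the next bus” service could be, consider the following Python example. The stop ID and schedule times are invented; a real service would load them from the transit agency’s published schedule (e.g., a GTFS feed) and sit behind an SMS gateway.

```python
from datetime import time

# A minimal sketch of the "text for the next bus" idea. The stop ID and the
# schedule times below are invented for illustration.
SCHEDULE = {
    "stop_42": [time(8, 0), time(8, 30), time(9, 0), time(17, 15)],
}

def next_bus(stop_id, now):
    """Return the next scheduled departure at a stop, or None if none remain today."""
    for departure in SCHEDULE.get(stop_id, []):
        if departure >= now:
            return departure
    return None

def reply_for(stop_id, now):
    """Format the one-line reply the SMS gateway would send back."""
    departure = next_bus(stop_id, now)
    if departure is None:
        return "No more buses today."
    return "Next bus at " + departure.strftime("%H:%M") + "."
```

An SMS provider would simply call `reply_for` with the stop ID from the incoming message and the current time.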

I was happy to see San Francisco take a small step in the right direction recently with the Open Law Initiative, but there is more to be done, and not just in the Bay Area. Major cities across the US and around the world could benefit from the New York model. See more of the Borthwick interview below:

Internet Sales Tax FAQ

We’ve got a week of Internet politics-related topics queued up for you. Today we’ll take a look at the prospect of an internet sales tax. Later in the week we’ll discuss why The Great Gatsby still isn’t in the public domain, and then take an overview of the net neutrality debate. The FAQs below are a summary of this explainer from CNN.

What’s the current state of sales tax law? 

In the US Supreme Court’s last major decision on the issue (Quill Corp. v. North Dakota), it ruled that a retailer must have a physical presence in a state in order to be required to collect sales taxes there. Technically, you are required to pay a use tax to your state if you order online from a retailer in another state–just as you would when purchasing physical goods outside your home state. But who actually does that? Virtually no one.

How much revenue would an online sales tax bring in?

The National Conference of State Legislatures estimated that states could gain $23 billion from sales taxes on internet commerce.

What’s going to change, and when? 

Last week the Senate voted 69-27 in favor of the so-called Marketplace Fairness Act. It now has to pass the House, where it will likely face more resistance. The Obama administration supports the bill, so if it passes the House it will become law. Even if passed, the changes will go into effect no earlier than October 1, 2013. If you have any major online purchases in mind, you may want to make them before then–another stimulus of sorts.

Risk, Overreaction, and Control

How many people died because of the September 11 attacks? The answer depends on what you are trying to measure. The official estimate is around 3,000 deaths as a direct result of the hijackings: at the World Trade Center, at the Pentagon, and in Pennsylvania. Those attacks were tragic, but the effect was compounded by overreaction to terrorism. Specifically, enough Americans substituted driving for flying in the remaining months of 2001 to cause 350 additional deaths from traffic accidents.

David Myers was the first to raise this possibility in a December 2001 essay. In 2004, Gerd Gigerenzer collected data and estimated the figure of 350 additional deaths, resulting from what he called “dread risk”:

People tend to fear dread risks, that is, low-probability, high-consequence events, such as the terrorist attack on September 11, 2001. If Americans avoided the dread risk of flying after the attack and instead drove some of the unflown miles, one would expect an increase in traffic fatalities. This hypothesis was tested by analyzing data from the U.S. Department of Transportation for the 3 months following September 11. The analysis suggests that the number of Americans who lost their lives on the road by avoiding the risk of flying was higher than the total number of passengers killed on the four fatal flights. I conclude that informing the public about psychological research concerning dread risks could possibly save lives.
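Gigerenzer’s estimate is essentially an exercise in multiplying a road fatality rate by the miles shifted from air to road. A back-of-envelope sketch, with illustrative round numbers rather than his actual inputs:

```python
# Back-of-envelope version of the dread-risk argument. Both numbers below are
# illustrative round figures, not Gigerenzer's actual inputs: US road travel
# around 2001 caused very roughly 1.5 deaths per 100 million vehicle-miles.
DRIVING_DEATHS_PER_MILE = 1.5e-8
extra_driving_miles = 2.3e10  # hypothetical miles shifted from air to road

expected_extra_deaths = DRIVING_DEATHS_PER_MILE * extra_driving_miles
```

At that fatality rate, it takes on the order of twenty billion extra driving miles to produce roughly 350 expected deaths.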

Does the same effect carry over to other countries and attacks? Alejandro López-Rousseau looked at how Spaniards responded to the March 11, 2004, train bombings in Madrid. He found that activity across all forms of transportation decreased–travelers did not substitute driving for riding the train.

What could explain these differences? One possibility is that Americans are less willing to forgo travel than Spaniards; perhaps more of their travel is for business and cannot be delayed. Another is that Spanish citizens are more accustomed to terrorist attacks and understand that substituting driving is riskier than continuing to take the train. There are many other differences that we have not considered here–the magnitude of the two attacks, feelings of being “in control” while driving, varying cultural attitudes.

This post is simply meant to make three points. First, reactions to terrorism can cause additional deaths if relative risks are not taken into account. Second, cultures respond to terrorism in different ways, perhaps depending on their previous exposure to violent extremism. Finally, the task of explaining differences is far more difficult than establishing patterns of facts.

(For more on the final point check out Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, which motivated this post.)

When will telephone polls have their “Literary Digest” moment?

Mention the name Literary Digest to a pollster and they will instantly know what you are talking about. Literary Digest is remembered for its famously wrong prediction that Kansas Republican Alfred Landon would beat Franklin Delano Roosevelt in the presidential election of 1936. Part of the problem was that, despite a sample size of 2.4 million and a response rate of nearly 25 percent, the groups that Literary Digest surveyed were not representative of voters. Respondents tended to be wealthier than average, since they were drawn from the Digest‘s subscribers as well as automobile registries and telephone books. Using a sample of “only” 50,000, George Gallup was able to predict the outcome correctly, and the Digest soon went out of business.
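The Digest’s failure is easy to reproduce in a toy simulation: a huge sample drawn from an unrepresentative subgroup loses to a small sample drawn from everyone. All the population shares and support rates below are invented for illustration.

```python
import random

random.seed(1936)

# Toy electorate: the wealthy are 20% of voters and lean toward the
# challenger, while everyone else leans toward the incumbent.
def voter():
    wealthy = random.random() < 0.20
    p_incumbent = 0.35 if wealthy else 0.66
    return wealthy, random.random() < p_incumbent

population = [voter() for _ in range(200_000)]

# Digest-style poll: enormous sample, drawn only from the wealthy subgroup.
digest_votes = [vote for wealthy, vote in population if wealthy]
digest_share = sum(digest_votes) / len(digest_votes)

# Gallup-style poll: far smaller sample, drawn from the whole population.
gallup_votes = [vote for _, vote in random.sample(population, 2_000)]
gallup_share = sum(gallup_votes) / len(gallup_votes)
```

The Digest-style poll confidently calls the election for the challenger; the much smaller representative poll gets it right.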

What people forget is that 1936 was not the first time that Literary Digest had conducted a presidential poll or made a prediction. In the previous four elections–dating back to 1920–the Digest had always been correct. The 1936 election was a “falling off the cliff” moment for their polling methodology.

On Friday David Rothschild of Microsoft Research came and gave a series of talks for the Duke political methodology group. He covered a number of interesting topics, including prediction markets and online experiments. There was also a presentation about his work-in-progress analyzing 2012 polling data collected via Xbox Live. One takeaway from that presentation is that, after correcting for the demographics of likely voters (as you might expect, Xbox respondents were overwhelmingly male and young), the Xbox poll tends to track the Pollster polling average.

An important issue that came up during the presentation was non-response bias. Telephone surveys now have vanishingly small response rates, and they are further complicated by the shift to cell phones. Pollsters cannot use computerized random digit dialing (RDD) to reach cell phones: the numbers have to be dialed by hand, which raises the time required and thus the cost of the poll. People are not switching to cell phones at random, either, so the shift further biases the sample.
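The demographic correction Rothschild applied is, at its core, post-stratification: compute support within each demographic cell, then weight the cells by their share of the electorate rather than their share of the sample. A minimal sketch, with every number invented for illustration:

```python
# Minimal post-stratification sketch. Cells, counts, and shares are made up.
sample = {
    # group: (respondents in sample, share supporting candidate A)
    "young men":   (800, 0.40),
    "young women": (100, 0.55),
    "older men":   (60,  0.52),
    "older women": (40,  0.60),
}
electorate_share = {"young men": 0.15, "young women": 0.15,
                    "older men": 0.33, "older women": 0.37}

n = sum(count for count, _ in sample.values())
raw_estimate = sum(count * support for count, support in sample.values()) / n
adjusted_estimate = sum(electorate_share[g] * support
                        for g, (_, support) in sample.items())
```

Here the raw sample, dominated by young men, puts candidate A at about 43 percent; reweighting by electorate shares moves the estimate to about 54 percent.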

The demise of telephone polls will not be gradual. Organizations like Gallup will have their own Literary Digest moment in which their methodology–which has been highly accurate for years–will fall off a cliff. It is only a matter of time.

Communication Technology and Politics

Cell phone coverage (black) and conflict locations (grey) in Africa (Pierskalla and Hollenbach, 2013: Fig. 1)

We have been on a technology kick this week, first talking about modern etiquette and then how technology improved traffic in LA. Today I want to point out two neat papers at the intersection of communication technology and politics.

The first article deals with “narrowcasting”-type technologies. Pierskalla and Hollenbach (2013) analyze the association between cell phone coverage and conflict in Africa.* They use 55×55 km grid cells rather than the more conventional country-year observational units for their analysis. Here’s the abstract:

The spread of cell phone technology across Africa has transforming effects on the economic and political sphere of the continent. In this paper, we investigate the impact of cell phone technology on violent collective action. We contend that the availability of cell phones as a communication technology allows political groups to overcome collective action problems more easily and improve in-group cooperation, and coordination. Utilizing novel, spatially disaggregated data on cell phone coverage and the location of organized violent events in Africa, we are able to show that the availability of cell phone coverage significantly and substantially increases the probability of violent conflict. Our findings hold across numerous different model specifications and robustness checks, including cross-sectional models, instrumental variable techniques, and panel data methods.

Another neat paper I came across recently deals more with broadcasting technologies. Adena et al. (2013) explore the association between radio broadcasts in pre-war Germany and pro- or anti-Nazi sentiment. The identification strategy is rather simple: before the Nazi party took power, radio broadcasts had an anti-Nazi slant; that changed when the Nazis took over in 1933. According to the paper, it took very little time for sentiments to change:

How far can media undermine democratic institutions and how persuasive can it be in assuring public support for dictator policies? We study this question in the context of Germany between 1929 and 1939. Using quasi-random geographical variation in radio availability, we show that radio had a significant negative effect on the Nazi vote share between 1930 and 1933, when political news had an anti-Nazi slant. This negative effect was fully undone in just one month after Nazis got control over the radio in 1933 and initiated heavy radio propaganda. Radio also helped the Nazis to enroll new party members and encouraged denunciations of Jews and other open expressions of anti-Semitism after Nazis fully consolidated power. Nazi radio propaganda was most effective when combined with other propaganda tools, such as Hitler’s speeches, and when the message was more aligned with listeners’ prior as measured by historical anti-Semitism.

There are several nice features that these papers have in common. The first is spatially disaggregated data, allowing for more fine-grained analysis of variation over space. (Although, as a commenter at one ISA panel pointed out, this is not necessarily useful for all research questions.) Another feature I like is that both go to great lengths to test the robustness of their findings–this is a positive development for the field and I hope the trend continues.

See also: Thomas Zeitzoff sends along two more papers on the topic: “Opium for the Masses: How Foreign Media Can Stabilize Authoritarian Regimes” (Kern and Hainmueller, 2009) and “Propaganda and Conflict: Theory and Evidence from the Rwandan Genocide” (Yanagizawa-Drott, 2012).

______________

*Note: Jan got his PhD at Duke and Florian is currently in the program. Both are friends of mine.

Ruby’s Benevolent Dictator

The Ruby Logo

The first version of the Ruby programming language was developed by Yukihiro Matsumoto, better known as “Matz,” in 1995. Since then it has become especially popular for web development thanks to the advent of Rails by David Heinemeier Hansson (DHH). A variety of Ruby implementations have also sprung up, optimized for various uses. You may recall our recent discussion of RubyMotion as a way to develop iOS apps in Ruby. As with human languages, the spread and evolution of computer languages raises an interesting question: how different can two things be and still be the same?

To run with the human language example for a bit, consider the following. My native language is American English. (There are a number of regional variants within the US, so even the fact that American English is a useful category is telling.) I would recognize a British citizen with a cockney accent as a speaker of the same language, even though I would have trouble understanding him or her. I would not, however, recognize a French speaker as someone with whom I shared a language. The latter distinction exists despite the relative similarity between the languages–a shared alphabet, shared roots in Latin, and so on. So who decides whether two languages are the same?

In the case of human languages this is very much an emergent decision, worked out through the behavior of numerous individuals with little conscious thought for their coordination. This is where the human/computer language analogy fails us. The differences between computer languages are discrete, not continuous–there are measurable differences and similarities between any two language implementations, and intermediate steps between one implementation and another might not be viable. So who decides what is Ruby and what is not?

That is the question Brian Shirai raised in a series of posts and a conference talk. As of right now there is no clear process by which the community decides the future of Ruby, or what counts as a legitimate Ruby implementation. Matz is a benevolent dictator–but maybe not for life. His implementation is known to some as MRI–“Matz’s Ruby Implementation”–with the implication that this is just one of many.

Shirai is proposing a process by which the Ruby community could depersonalize such decisions by moving to a decision-making council. This depersonalization of power relations is at the heart of what it means to institutionalize. Shirai’s process consists of seven steps:

  1. Ruby Design Council made up of representatives from any significant Ruby implementation, where significant means able to run a base level of RubySpec (which is to be determined).
  2. A proposal for a Ruby change can be submitted by any member of the Ruby Design Council. If a member of the larger Ruby community wishes to submit a proposal, they must work with a member of the Council.
  3. The proposal must meet the following criteria:
    a. An explanation, written in English, of the change, what use cases or problems motivate the change, and how existing libraries, frameworks, or applications may be affected.
    b. Complete documentation, written in English, describing all relevant aspects of the change, including documentation for any specific methods whose behavior changes or behavior of new methods that are added.
    c. RubySpecs that completely describe the behavior of the change.
  4. When the Council is presented with a proposal that meets the above criteria, any member can decide that the proposal fails to make a case that justifies the effort to implement the feature. Such a veto must explain in depth why the proposed change is unsuitable for Ruby. The member submitting the proposal can address the deficiencies and resubmit.
  5. If a proposal is accepted for consideration, all Council members must implement the feature so that it passes the RubySpecs provided.
  6. Once all Council members have implemented the feature, the feature can be discussed in concrete terms. Any implementation, platform, or performance concerns can be addressed. Negative or positive impact on existing libraries, frameworks or applications can be clearly and precisely evaluated.
  7. Finally, a vote on the proposed change is taken. Each implementation gets one vote. Only changes that receive approval from all Council members become the definition of Ruby.
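Steps 5 through 7 amount to a unanimity rule: a change becomes part of Ruby only if every Council implementation has implemented it and every implementation votes for it. A minimal sketch of that rule (the Council membership here is hypothetical):

```python
# Sketch of the unanimity rule in steps 5-7 of Shirai's proposed process.
# The Council membership below is hypothetical.
COUNCIL = ["MRI", "JRuby", "Rubinius"]

def accepted(implemented_by, votes_for):
    """True only when implementation and approval are both unanimous."""
    implemented = set(COUNCIL) <= set(implemented_by)
    unanimous = set(COUNCIL) <= set(votes_for)
    return implemented and unanimous
```

A single holdout blocks the change: `accepted(COUNCIL, ["MRI", "JRuby"])` is False, which is exactly what makes the process depersonalized but also demanding.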

Step 3B is a particularly interesting one for students of politics. As you may have guessed, Matz is Japanese. (This is somewhat ironic, since Ruby is currently the most readable language for English speakers–see this example if you don’t believe me.) Many discussions about Ruby take place on Japanese message boards, and some non-Japanese developers have even learned Japanese so that they can participate in these discussions. English is the lingua franca of the international software development community, so Shirai’s proposal makes sense, but it is not uncontroversial.

In Shirai’s own words this proposal would provide the Ruby community with a “technology for change.” That is exactly what political institutions are for–organizing the decision-making capacity of a community. This proposal and its eventual acceptance, rejection, or modification by the Ruby community will be interesting for students of politics to keep an eye on, and may be the topic of future posts.

The Randomness of Borders

Fifty US States Redrawn with Equal Population

Rivers and oceans help to form natural boundaries, but if it’s a straight line you can bet that it’s essentially random–and it might even be in the wrong place:

Four Corners Monument, which marks the intersection of Arizona, Colorado, New Mexico and Utah, lies 1,807 feet (550 meters) east of where it would have been placed in 1875 had surveyor Chandler Robbins used a modern GPS device to pinpoint the coordinates he was tasked with locating.

Anyway, it doesn’t matter now. Once set in stone, monuments become law. “Even if the surveyor made some grand mistake, once the monument is set and accepted, end of story. Where the monument is, that’s where the boundary is,” said Dave Doyle, chief geodetic surveyor at the National Geodetic Survey (NGS).

Those straight lines have value, though–they are easy to verify, and they make it simple to calculate the land area within a specified region. Linear borders for a parcel of land can increase its value by up to 30 percent, say economists Gary Libecap and Dean Lueck:

They look at the 116 billion square meters of land in the state of Ohio. Because of an accident of history, a large fraction of these square meters were assembled into irregularly shaped parcels via an uncoordinated process of private claims by independent individuals. The rest were assembled first into rectangular parcels along the lines of the survey called for in the Northwest Ordinance and then transferred to private ownership.

It’s worth reading the paper to get all the details, but the punch line is that this difference in the initial bundling of small bits of land had a lasting effect on how they are used. Today, more than 200 years later, a flat square meter is worth 30% less if it was initially assigned to an irregularly shaped parcel.
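The surveying point is easy to see in code: a rectangular parcel’s area is a single multiplication, while an irregular parcel requires its full surveyed boundary and something like the shoelace formula. The coordinates below are arbitrary units chosen for illustration.

```python
# Area of a rectangular parcel is one multiplication; an irregular parcel
# needs every surveyed corner and the shoelace formula.
def shoelace_area(vertices):
    """Area of a simple polygon given its (x, y) vertices in order."""
    n = len(vertices)
    twice_area = sum(
        vertices[i][0] * vertices[(i + 1) % n][1]
        - vertices[(i + 1) % n][0] * vertices[i][1]
        for i in range(n)
    )
    return abs(twice_area) / 2

rectangle = [(0, 0), (4, 0), (4, 3), (0, 3)]           # a 4 x 3 parcel
irregular = [(0, 0), (5, 1), (6, 4), (2, 5), (-1, 3)]  # five survey points
```

For the rectangle, `shoelace_area` just recovers width times height; for the irregular parcel there is no shortcut, which is part of why irregular parcels are costlier to describe, verify, and trade.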

I have been reading up on border arrangements in Europe and Africa lately as part of a project on state-making. The best introduction I have found so far is that of Jeffrey Herbst, who argues that maps and formal boundaries were not developed in Africa because low population densities made them useless. In fact, it took until 1975 for the population density of Africa to rival that of fifteenth-century Europe. For another look at the randomness of borders, check out this paper by John McCauley and Dan Posner.

See also: Ian Lustick on Israel’s borders

Leadership Targeting and Perverse Incentives

Enrique Pena Nieto with supporters. Photograph: Daniel Aguilar/Getty Images

If targeting of Drug Trafficking Organization (DTO) leaders in Mexico has contributed to high levels of violence, as I argue in a working paper, then why hasn’t the Mexican government stopped the policy? Under former president Felipe Calderon there were a number of possible answers, including the fact that his get-tough policy toward crime was a major part of his campaign strategy in 2006. But that does not explain why the policy has persisted under the new president.

When Enrique Peña Nieto won the 2012 election, he promised that his crime-fighting policy would aim to “reduce violence and above all protect the lives of all Mexicans.” The new administration acknowledges that leadership targeting led to increased violence, and a number of experts seem to agree. So why hasn’t the policy been changed?

The answer comes down to cold hard cash, and lots of it. US officials have been strongly supportive of DTO leadership targeting, echoing as it does the American policy of targeting terrorist leaders. And they have backed up that rhetoric with generous funding for Mexican security forces:

On Monday, Interior Minister Miguel Angel Osorio Chong said the strategy caused a fragmentation of criminal groups that had made them “more violent and much more dangerous,” as they branched out into homicide, extortion, robbery and kidnapping.

The next day, Jesus Murillo Karam, the new attorney general, said in a radio interview that the strategy was responsible for spawning 60 to 80 small and medium-sized organized crime groups.

But just because the strategy has taken some hits doesn’t mean it’s dead. And Peña Nieto, who took office Dec. 1, is unlikely to kill it….

Peña Nieto is also unlikely to jeopardize the generous security assistance provided by the United States, which helped design the kingpin strategy. The U.S. is intimately involved in carrying it out, providing intelligence on drug leaders’ whereabouts and spending millions to strengthen the Mexican security forces who act on that intelligence.

All of which probably explains why, shortly after the ministers’ criticism of kingpin, a top presidential advisor told The Times that the new government had no plans to abandon it.

“That will not stop at all,” said the advisor, who declined to be identified because he was not authorized to speak on the record.

One can appreciate the rock and the hard place between which Peña Nieto finds himself. His party has been criticized for being in the pocket of the cartels, so he cannot afford to look weak. There are also the entrenched interests of the military and police to keep in mind–they have no interest in giving up power. Unfortunately for the tens of thousands of Mexicans who have lost their lives or loved ones to violence over the last seven years, the president has not kept his word.

The New Netflix Strategy: Gambling on House of Cards

One week ago Netflix introduced its first original series, House of Cards. The series details the life and crimes of (fictional) US Congressman Francis Underwood and his wife Claire, who runs a nonprofit. What is unique about the series is that the entire season–13 episodes–was released all at once. Netflix and streaming services like it have acclimated us to watching shows in bulk like this. Is the new model sustainable?

I hope so, and Atlantic Wire reporter Rebecca Greenfield thinks the answer is yes:

With Netflix spending a reported $100 million to produce two 13-episode seasons of House of Cards, they need 520,834 people to sign up for a $7.99 subscription for two years to break even. To do that five times every year, then, the streaming TV site would have to sign up 2.6 million more subscribers than they otherwise would. That sounds daunting, but at the moment, Netflix has 33.3 million subscribers, so this is an increase of less than 10 percent on their current customer base. Of course, looking at Netflix’s past growth, that represents pretty reasonable growth for the company that saw 65 percent growth from 20 million to over 33 million world-wide streaming customers. Much of that growth, however, comes from new overseas markets. But, even in the U.S., from one year ago, Netflix saw about 13 percent streaming viewer growth jumping from 24 million to 27 million.

The five-times-per-year figure comes from a plan that Netflix CEO Reed Hastings revealed in an interview with GQ. Paying for subscription television like this is not a new idea–it’s a similar business model to HBO’s. But Netflix seems to have the execution right, at least with this first foray.
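Greenfield’s break-even arithmetic is easy to check. Her 520,834 figure appears to round the $7.99 monthly price to $8; with that assumption, the calculation reproduces her number exactly:

```python
import math

# Checking the Atlantic Wire break-even arithmetic. The quoted 520,834 figure
# works out if the $7.99 monthly price is rounded to $8, so that rounding is
# assumed here.
production_cost = 100_000_000  # reported cost of two 13-episode seasons
monthly_price = 8.00           # the $7.99 plan, rounded
months = 24                    # each subscriber retained for two years

break_even_subscribers = math.ceil(production_cost / (monthly_price * months))
```

At the exact $7.99 price the break-even point is slightly higher, around 521,500 subscribers, but the order of magnitude is the same.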

Perhaps the biggest difference from conventional television is that it doesn’t matter how many people watched House of Cards during its debut week. As Hastings said in a letter to investors two weeks ago:

Linear channels must aggregate a large audience at a given time of day and hope the show programmed will actually attract enough viewers despite this constraint. With Netflix, members can enjoy a show anytime, and over time, we can effectively put the right show in front of members based on their viewing habits. Thus we can spend less on marketing while generating higher viewership.

For linear TV, the fixed number of prime-time slots mean that only shows that hit it big and fast survive, thus requiring an extensive and expensive pilot system to keep on deck potential replacement shows. In contrast, Internet TV is an environment where smaller or quirkier shows can prosper because they can find a big enough audience over time. In baseball terms, linear TV only scores with home runs. We score with home runs too, but also with singles, doubles and triples.

Because of our unique strengths, we can commit to producing and publishing “books” rather than “chapters”, so the creators can concentrate on multi-episode story arcs, rather than pilots. Creators can work on episode 11 confident that viewers have recently enjoyed episodes 1 to 10. Creators can develop episodes that are not all exactly 22 or 44 minutes in length. The constraints of the linear TV grid will fall, one by one.

I look forward to seeing more of this strategy, and as I proceed with House of Cards you may even get a post on its politics.

Was the Civil War a Constitutional Fork?

Shortly after Aaron Swartz’s untimely suicide, O’Reilly posted their book Open Government for free on GitHub as a tribute. The book covers a number of topics, from civil liberties and privacy on the web to how technology can improve government, with each chapter written by a different author. My favorite was the fifth chapter, by Howard Dierking. From the intro:

In many ways, the framers of the Constitution were like the software designers of today. Modern software design deals with the complexities of creating systems composed of innumerable components that must be stable, reliable, efficient, and adaptable over time. A language has emerged over the past several years to capture and describe both practices to follow and practices to avoid when designing software. These are known as patterns and antipatterns.

The chapter goes on to discuss the Constitution and the Articles of Confederation as pattern and antipattern, respectively. In the author’s own words he hopes to “encourage further application of software design principles as a metaphor for describing and modeling the complex dynamics of government in the future.”

In the spirit of Dierking’s effort, I will offer an analogy of my own: civil war as fork. In open source software, a “fork” occurs when a subset of individuals involved with a project take an existing copy of the code in a new direction. Their contributions are not merged into the main version of the project; instead they go into the new code base, which develops independently.

This comparison seems to hold for the US Civil War. According to Wikipedia,

In regard to most articles of the Constitution, the document is a word-for-word duplicate of the United States Constitution. However, there are crucial differences between the two documents, in tone and legal content, and having to do with the topics of states’ rights and slavery.

Sounds like a fork to me. There’s a full list of the “diffs” (changes from one body of text or code to another) on the same wiki page. But to see for myself, I also put the text of the US Constitution on GitHub, then changed the file to the text of the CSA Constitution. Here’s what it looks like visually:

GitHub diff of the US and CSA constitutions

As the top of the image says, there are 130 additions and 119 deletions required to change the US Constitution into that of the Confederacy. Many of these are double-counts since, as you can see, replacing “United States” with “Confederate States” counts as both a deletion of one line and an addition of a new one.
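The double-counting is a general property of line-based diffs, which you can verify without GitHub using Python’s difflib. The two short snippets below stand in for the full constitutional texts:

```python
import difflib

# Reproducing the line-diff double-count with Python's difflib. These two
# snippets stand in for the full constitutional texts.
usa = ["We the People of the United States,",
       "in Order to form a more perfect Union,"]
csa = ["We, the people of the Confederate States,",
       "in Order to form a more perfect Union,"]

diff = list(difflib.unified_diff(usa, csa, lineterm=""))
additions = sum(1 for line in diff
                if line.startswith("+") and not line.startswith("+++"))
deletions = sum(1 for line in diff
                if line.startswith("-") and not line.startswith("---"))
```

The single changed opening line counts as one deletion plus one addition, just as in the GitHub view.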

I did not change trivial differences like punctuation or capitalization, nor did I follow the secessionists’ bright idea to number all subsections (which would have overstated the diffs). Wikipedia was correct that most of the differences involve slavery and states’ rights. Another important difference is that the text of the Bill of Rights is included–verbatim–as Section 9 of Article 1 rather than as amendments.

In other words, the constitution of the CSA was a blatant fork of the earlier US version. Are there other cases like this?