Data-Smart City Solutions


White Paper: Regulation, the Internet Way

A Data-First Model for Establishing Trust, Safety, and Security | Regulatory Reform for the 21st Century City

By Nick Grossman


We are in the transition from an industrial society to an information society. As we've learned from the first 25 years of life with the web, in the world of networked information, problems get solved differently – almost oppositely – than they did previously.

When distribution is free, participation is frictionless, and information is abundant, the order of operations for many tasks is flipped. My colleague at Union Square Ventures, Albert Wenger, describes this as a series of profound inversions.[1] For example: publish first, then edit (Wikipedia, blogging); get paid, then work (Kickstarter); start selling, then build your reputation (eBay). In contrast to the previous model for launching a product or idea – intense study and market research prior to launch, with extremely high costs for adjusting after the fact – the internet model is “release early and often”: learning as you go and iterating quickly and inexpensively, based on lots of data.

We are now facing this inversion in the area of regulation.

Our current regulations – intended to ensure trust, safety, and security in an uncertain world – were designed for the industrial era, when change happened over decades and information was expensive. This model, which we’ll refer to as the “1.0” model, took an “up-front permission” approach, based on central licensing and permitting as a prerequisite for acting.

Fast forward 100 years: internet-based connectivity has both radically lowered the cost of accessing information and redesigned our models for establishing trust across wide networks of people. For example, web and mobile platforms (such as Uber, Airbnb, and Facebook) have developed sophisticated (though still imperfect) internal systems for ensuring trust, safety, and security within their communities. Out of necessity, these systems scale to millions of users, are deeply integrated with real-time data, and are designed to adapt over time. To accomplish this, they employ an inverted model – we’ll call it the “2.0” model – which freely allows users to act, but then holds them accountable through data and accumulated reputation.

Or, put another way: the 1.0 model uses restricted access to achieve policy goals (public safety, etc.), given that enforcement and accountability are expensive and difficult to administer, whereas the 2.0 model relaxes market access and then uses large volumes of real-time data to hold actors accountable.

Today, we are facing a conflict between these two regulatory approaches. Since 2007, driven by the explosive growth of mobile apps, 1.0 rules and regulations have been tested by internet-based applications that use their own internal 2.0 regulatory schemes.[2] The most prominent examples have been in transportation (e.g., Uber, Lyft, Sidecar), travel (Airbnb), and health (23andMe[3]), but the problem is quickly spreading to every government jurisdiction and economic sector, and will only get worse as more aspects of our lives are mediated by web and mobile platforms.

As often happens during times of technological change, our instinct is to apply the rules from the previous era to the new technology. For example, when cars were first invented, the UK’s Locomotive Acts limited them to walking pace and required a three-person crew, including a man walking ahead with a red flag.[4] But over time, we adapted both our rules and our infrastructure (by building roads, highways, seat belts, etc.) to fit the new technology and the new economic and cultural norms it ushered in.

This paper explores how we could approach public sector regulations "the internet way," by harnessing the inverted approach to regulation that the web allows, and asks readers to consider a single, simple question:

Where can we replace permission-based rules with information-based rules, granting the freedom to operate in exchange for access to data?


In the spring of 2014, the transportation committee of Seattle’s City Council voted on proposed new regulations for “ridesharing” services, such as those dispatched by Uber, Sidecar, and Lyft in the U.S., and BlaBlaCar in Europe.[5] These new services, which spanned the spectrum from casual “carpool-like” networks to professional on-demand car service networks, were quickly gaining popularity with consumers because of their convenience (namely real-time smartphone-based dispatch and seamless integrated payments), low prices, and broad availability.

Facing the growth of these new “unregulated” transportation models, the city pondered a set of new rules.[6] Among the stated goals of the proposed regulations, the key points were:

“The Council finds that as the use of application dispatch technology by unlicensed companies, vehicles, and drivers raises significant public safety and consumer protection concerns; and

The Council finds that the use of application dispatch technology by unlicensed companies and drivers are competing with existing licensed taxicab and for-hire drivers in the transportation market and causing negative impacts;”

In other words: ensure public safety, and protect the existing taxi industry from new competition.

The proposed regulations outlined a new classification for such services (“Transportation Network Companies” or TNCs), and included a relatively traditional regime of licensing, inspection, insurance, and market-size restrictions, including:

●    Classifying TNC vehicles as “for-hire” vehicles (i.e., the same as taxis or liveries)

●    Limiting the total number of TNC cars to 300 citywide (across all platforms)

●    Limiting driving hours to 16 hours per week per car

●    Requiring drivers to apply for a special permit and pass a special test (in person)

●    Requiring drivers to take a defensive driving course and pass a test

●    Requiring vehicle inspections at state-approved facilities

●    Requiring criminal background checks for drivers

●    Requiring drivers to affiliate with only a single TNC (e.g., a driver can only drive for Lyft or Sidecar, not both)

●    Requiring the TNC company (platform) to physically locate in Seattle

●    Requiring commercial insurance

The proposed regulations above typify the 1.0, permission-based regulatory paradigm. Build a high bar for participation, where new actors (TNC companies, drivers) must prove a lot up front and receive permission to operate. Institute limits to protect the workers and regulate prices in the market. There is nothing wrong with this approach – in fact, it has served our cities well since the dawn of industrialization and urbanization.

In this case, the Transportation Network Companies were considered unregulated because they did not yet conform to this traditional set of licensing and safety standards. Yet, they had gained the trust and loyalty of many customers (in Seattle and many other cities), had built a track record of safety, and had delivered new economic opportunities to both platform operators and the individual drivers. How did they do that?

They did it by developing sophisticated internal systems to establish “trust and safety,” carefully refining the procedures and protocols to ensure that the platforms are safe for everyone to use. The methods they employ include: 360-degree peer review on every transaction (passengers rate drivers and vice versa); detailed real-time data on pick-up locations, routes, and drop-offs; and integrated credit card payments (to remove the crime risk associated with cash payments). New drivers can onboard quickly and easily (in some cases as easily as submitting a photograph of their license and insurance), and build their reputation over time.

These systems are driven by tremendous amounts of data about both drivers and passengers, and are designed not only to accommodate millions of users simultaneously, but actually to improve as volume increases (imagine a DMV that gets more effective and efficient the busier it gets).
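To make this concrete, here is a minimal sketch of the kind of rolling, data-driven standing check such a platform might run on a driver. It is illustrative only – the class name, window size, and suspension threshold are all invented, and real platforms use far richer signals:

```python
from collections import deque

class DriverReputation:
    """A toy rolling reputation built from per-trip peer ratings."""

    def __init__(self, window: int = 100, suspend_below: float = 4.6):
        self.ratings = deque(maxlen=window)  # keep only the most recent trips
        self.suspend_below = suspend_below   # illustrative threshold

    def record_trip(self, stars: int) -> None:
        if not 1 <= stars <= 5:
            raise ValueError("rating must be 1-5 stars")
        self.ratings.append(stars)

    @property
    def score(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 5.0

    def in_good_standing(self) -> bool:
        # New drivers participate immediately; accountability accrues with data.
        return len(self.ratings) < 10 or self.score >= self.suspend_below

driver = DriverReputation()
for stars in [5, 5, 4, 5, 3, 5]:
    driver.record_trip(stars)
print(driver.score, driver.in_good_standing())
```

Note the key property: the system’s confidence in a driver grows with usage, rather than being fixed at the moment a license is issued.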

This approach typifies the “2.0”, information-based trust and safety regime – a system that is both highly “open,” with a low barrier to entry, and highly accountable through the use of applied data. Internet marketplaces, dating back to eBay, have developed and refined this model for ensuring trust, safety, and security (i.e., “regulation”) in a way that is massively scalable and also allows even the smallest actors to participate with minimal initial overhead.

Imagine if eBay had tried to apply a “1.0” trust and safety model when it launched, requiring every seller to pass rigorous licensing tests, submit volumes of paperwork, and visit an eBay inspection site before selling on the platform. That simply wouldn’t have been possible at web scale, in an environment where vast numbers of small actors enter the market at high speed.

Coming back to the Seattle example, let’s imagine that instead of applying the 1.0 model to this new situation, the city applied the 2.0 model to the entire on-demand transportation sector. What might that look like? Would it be possible? What would be the risks and challenges? What would be the benefits?

Imagine the proposed Seattle regulations instead read as follows:

WHEREAS: Transportation Network Companies have demonstrated new, highly efficient, highly effective ways of regulating for-hire transportation through the application of technology and data;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY OF SEATTLE AS FOLLOWS: Anyone offering for-hire vehicle services may opt out of existing regulations as long as they implement mobile dispatch, e-hailing, and e-payments, 360-degree peer-review of drivers and passengers, and provide an open data API for public auditing of system performance with regards to equity, access, performance, and safety.

That may sound ridiculous, but it’s intended to illustrate that we have two competing regulatory paradigms at play, and that, as of this writing, our instinct is to apply the 1.0 paradigm to new scenarios that are emerging as part of the information economy.

To simplify the proposal even further, we come back to the question: where could we trade permission (the 1.0 model) for access to data (the 2.0 model)? This is the question we should ask at every new regulatory juncture.

Such a change in approach – moving from a licensing model to an information-access model – is at the heart of rethinking regulation for the information era.
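As a thought experiment, here is what a single record in such an information-access regime might look like. Everything below – the field names, the TripRecord type, the coarse-zone privacy choice – is a hypothetical illustration, not any platform’s actual schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TripRecord:
    """One anonymized trip, as a platform might publish it to an audit API."""
    trip_id: str
    pickup_zone: str      # coarse zone (e.g., ZIP code), not an exact address
    dropoff_zone: str
    requested_at: str     # ISO 8601 timestamp
    wait_seconds: int     # time from request to pickup
    fare_usd: float
    wheelchair_accessible: bool

record = TripRecord(
    trip_id="t-001",
    pickup_zone="98104",
    dropoff_zone="98115",
    requested_at=datetime(2014, 3, 1, 18, 5, tzinfo=timezone.utc).isoformat(),
    wait_seconds=240,
    fare_usd=11.50,
    wheelchair_accessible=False,
)
print(json.dumps(asdict(record), indent=2))
```

The design question is less about the exact fields than about the bargain: structured, auditable records in exchange for the freedom to operate.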


The search for trust amidst rapid change, as seen in the Seattle ridesharing example, is not new. It is, in fact, a natural and predictable response to times when new technologies fundamentally change the rules of the game.

We are in the midst of a major technological revolution, the likes of which we experience only once or twice per century. Economist Carlota Perez describes these waves of massive technological change as “great surges,” each of which involves “profound changes in people, organizations and skills in a sort of habit-breaking hurricane.”[7]

This sounds very big and scary, of course, and it is. Perez’s study of technological revolutions over the past 250 years – five distinct great surges[8] lasting roughly 50 years each – shows that as we develop and deploy new technologies, we repeatedly break and rebuild the foundations of society: economic structures, social norms, laws, and regulations. It’s a wild, turbulent, and unpredictable process.

Despite the inherent unpredictability of new technologies, Perez found that each of these great surges does, in fact, follow a common pattern:

First, a new technology opens up a massive new opportunity for innovation and investment. Second, the wild rush to explore and implement this technology produces vast new wealth, while at the same time causing massive dislocation and angst, often resulting in a bubble bursting and a recession. Finally, broader cultural adoption paired with regulatory reforms sets the stage for a smoother and more broadly prosperous period of growth, resulting in the full deployment of the mature technology and all of its associated social and institutional changes. And of course, by the time each 50-year surge concluded, the seeds of the next one had been planted.

So essentially: wild growth, societal disruption, then readjustment and broad adoption. Perez describes the “readjustment and broad adoption” phase – what she calls the “deployment period” – as the percolating of “common sense” throughout other aspects of society:

“The new paradigm eventually becomes the new generalized ‘common sense’, which gradually finds itself embedded in social practice, legislation and other components of the institutional framework, facilitating compatible innovations and hindering incompatible ones.”[9]

In other words, once the established powers of the previous paradigm are done fighting off the new paradigm (typically after some sort of profound blow-up), we come around to adopting the techniques of the new paradigm to achieve the sense of trust and safety that we had come to know in the previous one. Same goals, new methods.

As it happens, our current “1.0” regulatory model was actually the result of a previous technological revolution. In The Search for Order: 1877-1920, Robert H. Wiebe describes the state of affairs that led to the progressive era reforms of the early 20th century:

Established wealth and power fought one battle after another against the great new fortunes and political kingdoms carved out of urban-industrial America, and the more they struggled, the more they scrambled the criteria of prestige. The concept of a middle class crumbled at the touch. Small business appeared and disappeared at a frightening rate. The so-called professions meant little as long as anyone with a bag of pills and a bottle of syrup could pass for a doctor, a few books and a corrupt judge made a man a lawyer, and an unemployed literate qualified as a teacher.[10]

This sounds a lot like today, right? A new techno-economic paradigm (in this case, urbanization and inter-city transportation) broke the previous model of trust (isolated, closely-knit rural communities), resulting in a re-thinking of how to find that trust. During the “bureaucratic revolution” of the early 20th century progressive reforms, the answer to this problem was the establishment of institutions – on the private side, firms with trustworthy brands, and on the public side, regulatory bodies – that took on the burden of ensuring public safety and the necessary trust and security to underpin the economy and society.[11]

Coming back to today, we are currently in the middle of one of these 50-year surges – the paradigm of networked information – and roughly at its midpoint. We’ve seen wild growth, intense investment, and profound conflicts between the new paradigm and the old.

The question, then, is what is the “new common sense” of the information era, and how can we apply it to the concerns of public regulations?


Perez describes a "new common sense" emerging as part of each subsequent technological revolution. So what is the “common sense” of the information era?

Below I'll describe the two dominant paradigms that have undergirded the design and development of the web, beginning with the core protocols that underlie the internet itself, and continuing through the many platforms (independent, nonprofit, and commercial) that have grown up on top of it.

Decentralized Regulation and the Open Web

Joi Ito, the director of the MIT Media Lab, describes the internet as a "belief system," more than a technology:

“The Internet isn’t really a technology. It’s a belief system, a philosophy about the effectiveness of decentralized, bottom-up innovation. And it’s a philosophy that has begun to change how we think about creativity itself.”[12]

To unpack that a bit: the internet is a “belief system” in that it only works because everyone building its infrastructure has opted into the same set of standards and protocols. You adopt them, trusting that everyone else will too, and the whole thing works. There is no “central command” controlling the internet – rather, it’s a collection of millions of computers and pieces of networking equipment that have decided to speak the same language (the core internet protocols, which were developed collaboratively and which no one owns). At the most foundational level, to “join” the internet, all you need to do is turn on a server and speak TCP, HTTP, SMTP, and the other core protocols. This ethos of jumping in without needing to ask permission is literally baked into the web itself.
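A few lines of code make the point. Using only Python’s standard library, anyone can stand up a server that speaks HTTP and is thereby “on” the web – no registration, no license, no central approval:

```python
# A server "joins" the web simply by speaking the shared protocol.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello, open web\n"
        self.send_response(200)  # standard HTTP status line
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Anyone who speaks the protocol is, by definition, a peer on the network.
    HTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()
```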

This model is very different from a centralized, command-and-control style network, where access and participation are approved by a central authority – think of the original AT&T phone network, today’s cable television networks, early online services such as Prodigy and AOL, and today’s wireless carriers and mobile platforms.[13]

Understanding the decentralized web requires a real rethinking of how systems, and people, connect. It’s hard to grasp at first, as we are so familiar with the patterns and dynamics of centralized systems. This is a topic worthy of a library’s worth of exploration, so rather than attempt to capture the whole essence of decentralized systems and the open web here, I’ll just highlight a few characteristics that are relevant to the topic of regulation:

❉ Closed Systems → Open Networks (i.e., the role of open standards and protocols)

In an open, decentralized system, there is no central authority determining the rules, granting access, and enforcing order. Instead, there are many systems, heterogeneous in size and makeup, each governed independently, all working together.

For example, imagine a university network, an independent web hosting company, a guy with a server in his closet, and a giant ISP like Comcast. They all act independently, yet work together; what joins them and allows them to interoperate are standards – the communication protocols and data formats that each of the participating networks (anyone who wants to be part of the broader “internet”) agrees to use.

The standards that power the web range from the underlying transport and communication protocols (such as TCP, HTTP, and SMTP), to security and encryption protocols (such as SSL/TLS and DNSSEC), to transaction-verification protocols such as the Bitcoin protocol.[14]

In each of these cases, the role of the “governing” institutions that surround the internet is to drive the adoption of new and improved standards that can not only provide new features, but can improve the overall state of trust, safety and security. 

Standards-setting is a negotiation among many stakeholders, not a command-and-control process, and it’s more art than science. But, once adopted, the existence of these standards enables broad innovation, since all of the players (app developers, infrastructure builders, regulators, etc.) know the terms of engagement and can build freely, without needing to seek permission first.

❉ Point-to-point → Broadcast

Related to the notion of “open standards and protocols” is the shift from closed, point-to-point communications, to open, broadcast communications. Generally speaking, coordination and information sharing on the internet defaults to a broadcast model over a point-to-point model, and starting to think in these terms can lead to regulatory solutions more native to the internet.

For a simple example: in 2009, the way to get data about New York City’s bus schedule and routes was to send an information request to the records department. In return, you received a compact disc containing the data. The process took several weeks, and the data was often out-of-date. This “point-to-point” process was not only inefficient and expensive to operate, it didn’t benefit from any community-based review of the data. In 2010, New York City Transit switched to a broadcast model, where they published their data freely online, and hosted a public email discussion group, where developers could ask questions and provide feedback.[15] The result was not only a vibrant, engaged community of developers building useful apps that helped people use NYC’s buses, but also a streamlined data management and publication workflow on the inside of the transit agency.
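Today that broadcast model is embodied in open formats like GTFS, the schedule-data format that many transit agencies (including New York’s MTA) now publish. As a rough sketch, here is how a developer might read the stops out of any agency’s published feed; the file path is a placeholder for a downloaded feed:

```python
# Reading a published transit feed in GTFS, using only the standard library.
import csv
import io
import zipfile

GTFS_PATH = "gtfs_bus_feed.zip"  # placeholder: any agency's published feed

with zipfile.ZipFile(GTFS_PATH) as feed:
    # stops.txt is a required file in every GTFS feed.
    with feed.open("stops.txt") as f:
        stops = list(csv.DictReader(io.TextIOWrapper(f, encoding="utf-8-sig")))

print(f"{len(stops)} stops published in this feed")
for stop in stops[:3]:
    print(stop["stop_id"], stop["stop_name"], stop["stop_lat"], stop["stop_lon"])
```

Once the data is broadcast in a standard format, every app built against one feed works against any other – a network effect that point-to-point records requests could never produce.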

Generally speaking, the move from a point-to-point model to a broadcast / collaborative model is a difficult one to make, requiring an inversion in many lines of thinking, but it holds the key to unlocking new modes of collaborative problem solving.

The “open” internet has established a new model for distributed collaboration and problem solving, and has also served as a platform upon which a new generation of more centrally managed services has been built. Indeed, there is a natural tension between the decentralized nature of the underlying web and the centralized nature of many of the platforms built on top of it. But lessons can be drawn from the regulatory approaches taken by both, as they illustrate different aspects of what is possible and “common sense” in the information era.


Twice a year, a group of regulators and policymakers convene to discuss their approaches to ensuring trust, safety, and security in their large and diverse communities. Topics on the agenda range from financial fraud, to bullying, to free speech, to transportation, to child predation, to healthcare, to the relationship between the community and law enforcement.

Each is experimenting with new ways to address these community issues. As their communities grow (very quickly in some cases), and become more diverse, it’s increasingly important that whatever approaches they implement can both scale to accommodate large volumes and rapid growth, and adapt to new situations. There is a lot of discussion about how data and analytics are used to help guide decision making and policy development. And of course, they are all working within the constraints of relatively tiny staffs and relatively tiny budgets.

As you may have guessed, this group of regulators and policymakers doesn’t represent cities, states, or countries. Rather, they represent web and mobile platforms: social networks, e-commerce sites, crowdfunding platforms, education platforms, audio and video platforms, transportation networks, lending, banking and money-transfer platforms, security services, and more. Many of them are managing communities of tens or hundreds of millions of users, and are seeing growth rates upwards of 20% per month. The event is Union Square Ventures’ semiannual “Trust, Safety and Security” summit, where each company’s trust and safety, security, and legal officers and teams convene to learn from one another.

In 2010, my colleague Brad Burnham wrote a post suggesting that web platforms are in many ways more like governments than traditional businesses.[16] This is perhaps a controversial idea, but one thing is unequivocally true: like governments, each platform is in the business of developing policies which enable social and economic activity that is vibrant and safe. 

The past 15 or so years have been a period of profound and rapid “regulatory” innovation on the internet. In 2000, most people were afraid to use a credit card on the internet, let alone send money to a complete stranger in exchange for some used item. Today, we’re comfortable getting into cars driven by strangers, inviting strangers to spend an evening in our apartments (and vice versa), giving direct financial support to individuals and projects of all kinds, sharing live video of ourselves, taking lessons from unaccredited strangers, etc. In other words, the new economy being built in the internet model is being regulated with a high degree of success.

Of course, that does not mean that everything is perfect and there are no risks. On the contrary, every new situation introduces new risks. And every platform addresses these risks differently, and with varying degrees of success. Indeed, it is precisely the threat of bad outcomes that motivates web platforms to invest so heavily in their “trust and safety” (i.e., regulatory) systems and teams. If they are not ultimately able to make their platforms safe and comfortable places to socialize and transact, the party is over.

As with the startup world in general, the internet approach to regulation is about trying new things, seeing what works and what doesn’t work, and making rapid (and sometimes profound) adjustments. And in fact, that approach – watch what’s happening and then correct for bad behavior – is the central idea.

So: what characterizes these “regulatory” systems and helps them achieve their goals in new ways?

❉ Permission → Accountability:

The central inversion in internet-style regulation is the flip from "permission" to "accountability." Granting permission, at scale, is enormously time-consuming and expensive. That's why the line at the DMV is always so long. Accountability, on the other hand, can be surprisingly inexpensive and easy to implement when we're living in a world of abundant, connected information.

A challenge here is that switching from permission to accountability involves taking a different approach to risk. A permission-based model intends to minimize risk (not eradicate it!) by carefully selecting who can act and how. An accountability-based model, on the other hand, trades some up-front risk for far broader participation, backed with strict accountability based on data.

Diving a step further, there are two separate but equally important points here:

Broad participation: This is a direct result of reducing barriers to participation (aka permission). There is also a direct line to be drawn between broad participation and innovation – more experiments and a broader diversity of participants.

Strict accountability: Since "broad participation" opens the floodgates and invites risk, it only works when it is paired with strict accountability. On the web, this comes in many flavors: if your web server transmits a lot of spam email, other servers will stop trusting you; if you take money on eBay but never ship a product, your rating will reflect that and buyers won't trust you; if your car is smelly and you're grouchy and rude, people won't want to ride with you.

Of course, as activities move from purely online to online and the real world, and as we move into traditionally regulated sectors such as transportation, finance, and healthcare, the stakes get higher. But rather than default back to the permission-based model we're comfortable with, I argue that we should instead look to identify accountability-based models.
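The contrast can be expressed in a few lines of illustrative Python – the registry, thresholds, and function names below are all invented for the example:

```python
# Two styles of gatekeeping, side by side (illustrative only).

LICENSED = {"driver-007"}  # 1.0: a pre-approved registry

def may_operate_permission(driver_id: str) -> bool:
    """1.0 model: act only if you were approved up front."""
    return driver_id in LICENSED

def may_operate_accountability(history: list[int], min_trips: int = 10,
                               floor: float = 4.5) -> bool:
    """2.0 model: anyone may start; the data decides whether you continue."""
    if len(history) < min_trips:
        return True  # low barrier to entry for new participants
    return sum(history) / len(history) >= floor  # strict ongoing accountability

print(may_operate_permission("driver-123"))        # False: no license, no ride
print(may_operate_accountability([5, 5, 4, 5]))    # True: still building a record
print(may_operate_accountability([2, 1, 3, 2, 1, 2, 1, 2, 1, 2]))  # False
```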

❉ Professional → Casual:

Web and mobile platforms have made it easier for individual people to become not just consumers but producers. Setting up a blog, publishing music or a video, offering a seat in your car, renting your apartment for the weekend, teaching a class, giving medical advice – these were all previously the sole domain of licensed and vetted professionals. As such, it made sense to have high barriers to entry in these fields, typically in the form of licensing and permitting regimes. 

But now, if I give someone a lift (and they pay me), am I a taxi driver? If I rent out my apartment for a weekend (for money), am I running a hotel? If I make and sell small-batch wooden spinning tops, am I a toy manufacturer? If I publish how-to videos, am I a university? What was once black and white is now many shades of gray.

Web platforms embrace these shades of gray, building systems that allow amateurs to join in and build up their credentials over time. But this has created a conundrum for public sector regulations, which have historically drawn bright lines between professional and nonprofessional activity, and established formal regimes for managing licensing, permitting, and enforcement of rules.

This is only going to get worse, as web and mobile platforms turn more amateurs into quasi-professionals in more and more fields. In the coming years, we’ll see more education, financial advice, healthcare, transportation services, and many other regulated activities happening in more casual ways, mediated by information-based trust and safety systems.

❉ Big bang → Incremental

Another inversion at the heart of startup and internet culture is the shift from “test, then ship” to “ship, then learn.” Or, as I mentioned earlier, “release early and often.” Because, in the information age, the cost of distributing changes is close to zero, we can “ship” products and ideas (and even policies) much earlier in the process, and then adjust based on real feedback from the market.

To take a big, expensive example: launched on October 1, 2013, to a nationwide audience of eager insurance customers. A large set of requirements, a single build, insufficient testing, and huge demand added up to a catastrophic crash that threatened the entire program.[17]

This was, of course, an unfortunate approach. A more “internet native” approach would have been to start with a smaller feature set, a pilot user base or a state-by-state rollout, or even multiple competing implementations “A/B tested” in different markets.

The same applies to policy. Every web platform innovating in the trust and safety space treats their policies the same way they treat their products: launching them, looking closely at the data, outcomes, and feedback, and making adjustments. Such an approach is only possible where distribution is inexpensive, and feedback (in the form of real-time data) is abundant.
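For the flavor of it, here is a minimal sketch of policy A/B testing: deterministic variant assignment plus an outcome comparison. All names and numbers are invented:

```python
# Test a policy change the way a platform tests a product change.
import hashlib
from statistics import mean

def variant(user_id: str) -> str:
    """Stable 50/50 assignment based on a hash of the user id."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "new_policy" if bucket < 50 else "control"

# Hypothetical outcome data: complaint counts per user.
fake_complaints = {"u1": 0, "u2": 2, "u3": 1, "u4": 0, "u5": 3, "u6": 0}

outcomes = {"new_policy": [], "control": []}
for user_id, complaints in fake_complaints.items():
    outcomes[variant(user_id)].append(complaints)

for name, values in outcomes.items():
    if values:
        print(f"{name}: mean complaints = {mean(values):.2f} (n={len(values)})")
```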

❉ Avoid all risk → Learn from mistakes

Building on the idea of moving from big-bang to incremental policy development, an information-centered approach to regulation and policymaking also enables an inversion with regard to risk. 

Traditional regulatory and policy development attempts to minimize as much risk as possible, through advance study, deliberation, and restrictive licensing and permitting regimes. Of course, even then there is risk, but we are comforted by the idea that we’ve anticipated many possible outcomes and mitigated as many expected risks as possible.

In the information era, we can take a different approach, primarily because we can make rapid adjustments after the fact based on what we learn from initial implementations – accepting somewhat more risk up front in order to reduce cumulative risk exposure over time.

Again, the crucial ingredient here is information. Taking a more relaxed up-front approach to risk doesn’t work if you can’t instantly learn from any outcomes of your policy. And likewise, the more information we connect to the regulatory apparatus, the more tolerant we can be towards up-front risk.


The big idea at the core of information-era regulation is that data is a new regulatory tool, one that can replace permission-based systems (licensing, permitting, etc.). So, when approaching new regulatory situations, the question to continually ask is: “where could we substitute data-based accountability for up-front permission?”

To revisit the original framing, taking this approach doesn’t mean deregulating, but rather, re-regulating.

Below are some specific examples of how we might apply the big idea:


❉ An alternative compliance mechanism: trading permission for data

We should explore designing an “alternative compliance mechanism” for data-driven companies that could easily comply with the 2.0 model, in exchange for an exemption from existing 1.0-style rules.

For example: as we discussed in the ride-sharing scenario, one could imagine a regime in which new entrants in the ride-sharing space are exempt from traditional taxi regulations in exchange for opening up a real-time data feed that regulators could use to audit system performance on issues such as safety, economic impact, and labor impact.
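To suggest what a regulator might actually do with such a feed, here is a toy audit comparing average pickup wait times across neighborhoods and flagging outliers. The data, zone codes, and the 1.25x review threshold are all invented for illustration:

```python
# A regulator-side equity check over a (hypothetical) trip feed.
from collections import defaultdict
from statistics import mean

trips = [  # in practice, fetched continuously from the platform's audit feed
    {"pickup_zone": "98104", "wait_seconds": 180},
    {"pickup_zone": "98104", "wait_seconds": 240},
    {"pickup_zone": "98118", "wait_seconds": 540},
    {"pickup_zone": "98118", "wait_seconds": 660},
]

waits = defaultdict(list)
for trip in trips:
    waits[trip["pickup_zone"]].append(trip["wait_seconds"])

citywide = mean(w for zone_waits in waits.values() for w in zone_waits)
for zone, zone_waits in sorted(waits.items()):
    ratio = mean(zone_waits) / citywide
    flag = "  <-- review" if ratio > 1.25 else ""
    print(f"zone {zone}: avg wait {mean(zone_waits):.0f}s ({ratio:.2f}x citywide){flag}")
```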

Taking this approach involves clearing at least two major hurdles:

First, most web platforms – especially large ones with established business models and large user bases – will be hesitant to share data with regulators. Such an exchange can only work if there is a real incentive to avoid traditional regulations.

Second, many regulatory bodies do not have the in-house technology or skillsets to manage, analyze, and react to real-time data coming from data-first actors. This strikes me as an opportunity for government-facing entrepreneurs to build such systems, for trust and safety teams at web platforms to open source or spin out the technologies they’ve been building to monitor such issues, and for third-party developers to build analytic tools on top of this open data.

❉ Personal data as individual empowerment

It’s important to remember that using data to regulate doesn’t just mean requesting government access to data. It could mean strategically opening up access to data in other key places. 

For example, to revisit the ride-sharing example discussed above, what if regulators were simply to require that drivers be able to access the data they’ve produced within the system (trip records, passenger ratings, earnings, etc.)? This simple, targeted intervention would immediately give drivers much more leverage in their relationship with the platforms, and would increase competition in the ecosystem. For more on this specific idea, see Albert Wenger’s “Right to an API Key (Algorithmic Organizing).”[18] Such an approach could obviate the need for more complex labor regulations.
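A sketch of what that access could enable for an individual driver follows. The API endpoint in the comment is hypothetical – no such standard exists today – so the example works from an already-downloaded export of the driver’s own records:

```python
# Computing portable statistics from a driver's own exported trip data.
import json

# Hypothetically: records = GET https://platform.example/v1/me/trips
# (authenticated with the driver's own API key).
records = json.loads("""[
  {"fare_usd": 14.00, "minutes": 22, "rating": 5},
  {"fare_usd":  9.50, "minutes": 15, "rating": 4},
  {"fare_usd": 23.25, "minutes": 38, "rating": 5}
]""")

total_fares = sum(r["fare_usd"] for r in records)
total_hours = sum(r["minutes"] for r in records) / 60
print(f"effective earnings: ${total_fares / total_hours:.2f}/hour")
print(f"portable rating: {sum(r['rating'] for r in records) / len(records):.2f}")
```

A driver who can compute their effective hourly wage, and carry their rating with them, can comparison-shop between platforms – which is precisely the leverage the intervention is meant to create.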

Generally speaking, this approach is not a new idea for policymakers, who have long traditions of using disclosure and transparency as regulatory tools. But in the era of connected information, with information flowing in more directions, more quickly, with more tools available to process and respond to it, the power of that data is more important than ever, as is its structure.

❉ Sandboxes made of data: safe harbors and incentives to register

The idea of high-information regulation only works if there is an exchange of information! So the question is: can we create an environment where startups feel comfortable self-identifying, knowing that they are trading freedom to operate for accountability through data? Such a system, done right, could give regulators the lead time needed to understand a new approach, while also developing a relationship with entrepreneurs in the sector. Entrepreneurs are largely skeptical of this approach, given how well-worn the “build an audience, then ask for forgiveness” playbook has become. But that playbook is risky and expensive, and having watched it play out a few times, perhaps we can find a more moderate approach.

Nick Sinai, former deputy CTO of the United States and now a fellow at the Harvard Kennedy School, describes this as “Sandboxing And Smart Regulation In An Age of A/B Testing” and gives an example of this approach applied to drone policy:

“As an example, suppose the FAA stipulated that for the next few years, companies are free to fly drones, under 200 ft, in the state of Texas, for agricultural applications – as long as they exhibit common sense and due caution. The lessons from a limited use case would then inform the FAA’s ongoing rulemaking efforts – likely making the final rules simpler and better.”[19]

There are two key points here: first, that we should give ourselves as many opportunities to experiment with new technologies and approaches as we can. This is the essence of “innovation.” And second, if done correctly, sandboxed policymaking can also dramatically improve the quality of any ultimate rules.
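A sandbox rule of the kind Sinai describes is simple enough to state as code. The bounding box, altitude limit, and purpose check below are placeholders loosely following his drone example, not real FAA policy:

```python
# Checking a proposed flight against illustrative sandbox constraints.
from dataclasses import dataclass

@dataclass
class Flight:
    latitude: float
    longitude: float
    altitude_ft: float
    purpose: str

# Very rough Texas bounding box -- for illustration only.
TEXAS = {"lat": (25.8, 36.5), "lon": (-106.6, -93.5)}

def allowed_in_sandbox(flight: Flight) -> bool:
    in_texas = (TEXAS["lat"][0] <= flight.latitude <= TEXAS["lat"][1]
                and TEXAS["lon"][0] <= flight.longitude <= TEXAS["lon"][1])
    return (in_texas
            and flight.altitude_ft < 200
            and flight.purpose == "agricultural")

print(allowed_in_sandbox(Flight(31.0, -100.0, 150, "agricultural")))  # True
print(allowed_in_sandbox(Flight(31.0, -100.0, 450, "agricultural")))  # False
```

Because the rule is explicit and machine-checkable, both the regulator and the regulated can log every flight against it – and the logs, in turn, inform the final rulemaking.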

❉ Extend peer-to-peer regulation systems to address externalities and improve public safety

A criticism of many web-driven trust and safety schemes is that they only consider internal trust and safety issues, not externalities (such as neighbor complaints or reduced access to affordable housing in the context of home-sharing activity).

Again, here, we should look for internet-model solutions to these new problems. For instance, in the home sharing example, how might third parties such as neighbors be invited to participate in the peer feedback system? How might they be incentivized as actors in the system? What data might we look to require from operating platforms in exchange for the continued freedom to operate?

For example, the startup MeWe[20] is developing a peer-to-peer inspection tool called CoInspect that allows individuals to fill out web- and mobile-based inspection forms that are collected, analyzed, and reported back to regulators at the local level. Imagine a scenario where every home-sharing guest filled out a one-question survey at the end of their stay, answering a question such as “was there a smoke alarm installed?” – instead of a single building inspector filling out a 300-point survey once every 5 years, millions of guests would fill out a single-point survey every day. Such an approach turns what can be seen as a liability (guests staying in non-commercial residences) into an opportunity to extend the reach of the city’s public safety regime.
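Aggregating those single-question responses into an actionable signal is straightforward, which is rather the point. A sketch, with invented data and an invented 80% threshold:

```python
# Turning one-question guest surveys into a compliance signal for inspectors.
from collections import defaultdict

responses = [  # (listing_id, "did the unit have a working smoke alarm?")
    ("apt-12", True), ("apt-12", True), ("apt-12", True),
    ("apt-99", False), ("apt-99", True), ("apt-99", False),
]

tally = defaultdict(lambda: [0, 0])  # listing -> [yes_count, total]
for listing, has_alarm in responses:
    tally[listing][0] += int(has_alarm)
    tally[listing][1] += 1

for listing, (yes, total) in sorted(tally.items()):
    rate = yes / total
    flag = "  <-- dispatch inspector" if rate < 0.8 else ""
    print(f"{listing}: {yes}/{total} guests report a smoke alarm{flag}")
```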


❉ Regulate down: extend the new model to existing market participants

Finally, how might we apply this “internet model” of regulation retroactively to existing market participants? One argument against flexibility in regulatory schemes is that it’s unfair to existing market participants who have invested in the previous regime and played by its rules (e.g., licensed taxi drivers and existing medallion owners).

This is a fair argument. But rather than apply the old rules to technology- and information-driven new entrants, we should look for opportunities to “regulate down” and incentivize adoption of information-era regulatory approaches by industrial-era market participants.


The arrival of the information age has already delivered amazing new resources and tools, in the form of abundant information and vast access to the world’s collective human capability. 

It has also delivered a profound set of new challenges for policymakers and regulators, as many of the assumptions that our laws and policies were based on have changed. 

Luckily, among the tools delivered by the information age are new approaches to regulating human behavior – i.e., establishing trust, safety, security, and positive social outcomes. These new approaches are built on the creative, and sometimes counterintuitive, use of information.

The next challenge for regulators is not merely how to respond to information-era operating models, but also how to reinvent the regulatory process to harness the same data-first approaches that are powering these new models.


Thank you to the following people whose ideas and feedback improved this paper: Rit Aggarwala, Brad Burnham, Jessica Casey, Faiza Issa, Brittany Laughlin, Max Pomeranc, Nick Sinai, Mitch Weiss, Albert Wenger, Fred Wilson, and Aaron Wright.



[1] See Albert’s TEDx talk on inversions and public policy, and his blog, Continuations.

[2] For more detail, see Max Pomeranc’s excellent 2014 paper, “Regulation and the Peer Economy: a 2.0 Framework.” Disclosure: I advised Max on this paper.

[3] See the FDA’s warning letter to 23andMe.

[4] See the UK Locomotive Acts of 1861 and 1865. Hat tip to Albert Wenger for the example.

[5] “Seattle floats new ridesharing rules: 300 driver permits; 75 additional taxi licenses”, Geekwire, Feb. 11, 2014.

[6] The final rules passed in Seattle in July, 2014 were a compromise position, removing the 300 car limit for TNCs, expanding insurance requirements, and increasing the supply of traditionally regulated taxis.

[7] Perez, Carlota: Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages, page 4.

[8] (1) The Industrial Revolution: 1771-1829; (2) the Age of Steam: 1829-1875; (3) the Age of Steel, Electricity and Heavy Engineering: 1875-1908; (4) the Age of Oil, Automobiles and Mass Production: 1908-1971; and finally (5) the Age of Information and Telecommunications, beginning with Intel’s invention of the first microprocessor in 1971.

[9] Perez, page 18

[10] Wiebe, Robert H.: The Search for Order: 1877-1920.

[11] Hat tip to Rit Aggarwala for pointing out this historical parallel and leading me to Wiebe’s work.

[12] “In an Open-Source Society, Innovating by the Seat of Our Pants”, NY Times, Dec 5, 2011.

[13] Astute readers will note that many of today’s application platforms, such as Facebook and Twitter, are centralized, rather than open, systems. This is true, but I distinguish here between platforms that stand between us and the internet (Prodigy, iOS) and platforms that live on top of the open internet (Twitter, Facebook, etc.), and I address web-based platforms in the next section.

[14] See the TLS and DNSSEC specifications; for fun, check out Satoshi Nakamoto’s original white paper describing the Bitcoin protocol.







About the Author

Nick Grossman

Nick Grossman is a technologist & entrepreneur focused on the intersection of the web and urban, social, and civic systems. For the past 10 years, he has developed software and media products, advocacy efforts and internet-based businesses that help cities and the internet work better together. He works at Union Square Ventures and is an advisor to the Data-Smart City Solutions project.


