Ultra-large-scale marketing operations

A couple of major trends in software development — in particular, open source collaboration and the design of social network/user-generated content platforms — may provide useful insight for the future of marketing management.

After all, the increasing number of marketing channels and the increasing granularity of initiatives within them combine to form ultra-large-scale marketing environments that share properties with ultra-large-scale (ULS) software systems.

It’s not coincidental that many of the leading web sites whose value is derived from crowdsourcing and peer production — MySpace, Facebook, YouTube, Wikipedia, Twitter — are also at the heart of the digital marketing maelstrom. The underlying forces are the same, and they feed each other. Where would Twitter be without the championship of so-called social media marketers? Where would social media marketers be without the latest infusion of attention to their mission that Twitter has brought?

This weekend, I was reading a presentation called The Metropolis Model: A New Logic for System Development by Rick Kazman of Carnegie Mellon University. Although it’s written in the context of software development for ultra-large-scale systems such as Facebook, many of the characteristics he describes should resonate with digital marketers:

  • mashability
  • conflicting, unknowable requirements
  • continuous evolution
  • focus on operations
  • open teams
  • sufficient correctness
  • unstable resources
  • emergent behaviors

At a certain scale, the closed-loop, top-down structures of old-school software — and old-school marketing for that matter — simply don’t cut it. Instead, one has to design mechanisms for enabling broad participation and co-creation, yet do so without losing the core integrity of the system as a whole, its gravitational center.

Some of the principles that Kazman recommends make sense in software and marketing:

  • egalitarian management — encouraging a broad swath of involvement
  • bifurcation — tight control of the “core”, openness in the periphery
  • fragmented implementation — many independent participants contribute
  • distributed testing — get many people involved in quality control
  • ubiquitous operations — the system is “always on”, always evolving

In reading Kazman’s work, I followed a trail back to a 2006 report by Kazman and numerous collaborators, Ultra-Large-Scale Systems: The Software Challenge of the Future. Written for the U.S. Department of Defense (DoD) to advise it on its goal of “information dominance”, it takes on a funky yet fascinating twist if you substitute “marketing” for “systems” and “software” — and swap in “your company” for “DoD”.

Again, the areas they recommend addressing in their research agenda seem highly relevant to marketing:

  • human interaction around socio-technical systems
  • computational emergence using game theory and digital evolution
  • design that encompasses individuals, organizations, and software
  • adaptive system infrastructure for massive decentralization
  • adaptable quality control in the face of continuous change

Have computer science and marketing already converged, but they just don’t realize it yet?


2 thoughts on “Ultra-large-scale marketing operations”

  1. What role does large data play in these large-scale models, particularly as they apply to marketing systems?

  2. That’s a great question.
    Ultra-large-scale data is clearly a topic all its own. There are two trends that are at the intersection of that with marketing that I find particularly interesting:
    1. The deluge of data seems to now be one of the drivers behind making semantic web efforts real. Data in a silo is nowhere near as valuable as data that can be linked into the rest of the universe. And the ability to link data with loose associations (e.g., RDF) rather than strict schema coordination makes linked data a whole lot more feasible. But it is a big paradigm shift from the way most people think of data management.
    2. The rise of “alternative” database and storage architectures for large-scale systems, such as Amazon S3 and Google BigTable, is enabling much bigger data sets to be efficiently harnessed — and is also helping to change the way we think about data in the context of these large-scale applications.
    And most of these initiatives are still very early in their evolution. I’m looking forward to seeing how they advance over the years ahead.
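    To make the linked-data point concrete, here’s a minimal sketch in plain Python (no RDF library; all URIs and dataset names are made-up examples) of the core idea: when data lives as (subject, predicate, object) triples, two independently managed datasets can be linked by simple set union, with no up-front schema coordination:

    ```python
    # RDF-style data is just a set of (subject, predicate, object) triples.
    # Two teams can publish data independently; as long as they use shared
    # identifiers (URIs), linking is trivial. All URIs here are hypothetical.

    crm_data = [
        ("ex:customer/42", "ex:name", "Alice"),
        ("ex:customer/42", "ex:purchased", "ex:product/widget"),
    ]

    web_analytics = [
        ("ex:customer/42", "ex:visited", "ex:page/pricing"),
        ("ex:product/widget", "ex:category", "gadgets"),
    ]

    # "Linking" the datasets is simply taking the union of the triple sets —
    # no schema migration, no column mapping.
    linked = set(crm_data) | set(web_analytics)

    def about(subject, triples):
        """Everything any dataset asserts about a given subject."""
        return sorted((p, o) for s, p, o in triples if s == subject)

    # The merged view of customer 42 spans both original datasets.
    print(about("ex:customer/42", linked))
    ```

    Contrast this with a relational approach, where merging the CRM and analytics tables would require agreeing on a shared schema before any join is possible — which is exactly the paradigm shift the comment above describes.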
