For decades, placemaking in Britain has been governed as much by judgement as by methodology. Practitioners speak of “character”, “vibrancy”, and the elusive “feel” of a streetscape: qualities apprehended through experience, observation of human use, and professional instinct rather than formalised metrics. The accomplished practitioner was often the one who had seen enough places to recognise what worked, even when the causal mechanisms remained partly intangible.

In this sense, placemaking has long resembled architecture itself: a synthesis of art, social science, economics, and regulation, mediated through professional judgement rather than deterministic rules. Yet the conditions under which places are conceived and delivered have shifted fundamentally.

Post-pandemic working patterns have destabilised long-standing assumptions about commuting, retail demand, and office viability. Biodiversity Net Gain requirements impose quantifiable ecological obligations. Net-zero commitments extend accountability across entire asset lifecycles. Meanwhile, a chronic housing shortage demands both acceleration and precision in delivery, even as capital becomes more risk-averse and planning processes more contested.

In such an environment, the margin for intuition unsupported by evidence has narrowed sharply. What is emerging is not the replacement of professional judgement, but a new evidential paradigm: the science of place.

Artificial intelligence does not eliminate human insight; it augments it, transforming placemaking from a discipline grounded in experience into one informed by behavioural data, environmental modelling, and predictive analysis at unprecedented scale.

The long evolution of placemaking: From geometry to behaviour

To understand the significance of this shift, it is worth situating it within the deeper history of urban form.

Placemaking has always oscillated between two conceptions of the city: as an engineered system and as a living organism.

Ancient settlements were shaped primarily by defence, trade routes, and topography. Roman urbanism imposed stricter geometric order through grids, forums, and infrastructure: expressions of imperial authority rather than organic growth. Medieval towns evolved more organically, responding to craft economies, walkability, and local geography long before such considerations were codified.

The Industrial Revolution disrupted this equilibrium. Rapid urbanisation generated overcrowding, pollution, and public health crises, prompting reform movements that emphasised sanitation, use restrictions, and municipal-level infrastructure. Britain’s own Garden City movement, led by Ebenezer Howard, represented an early attempt to reconcile efficiency with wellbeing, a conceptual precursor to contemporary sustainability discourse.

Twentieth-century planning then diverged sharply. Modernism, spurred by the proliferation of private car ownership, privileged rationality, functional separation, and large-scale intervention. Post-war reconstruction in the UK embraced system-built housing, arterial road networks, and land use frameworks designed to maximise throughput and growth. Many schemes delivered homes at scale but struggled to foster social cohesion or local identity. By the late twentieth century, a consensus began to emerge: physical form alone does not determine success; human behaviour does. This recognition laid the intellectual foundations for modern placemaking and for the debates that continue to shape it.

Moses vs. Jacobs: The battle for the street

The mid-twentieth-century struggle between Robert Moses and Jane Jacobs remains the defining allegory of urban planning.

Moses embodied technocratic modernism. Viewing the city from above, he approached it as a logistical system to be optimised for movement and efficiency. Expressways, large-scale housing schemes, and infrastructure megaprojects reflected a conviction that expert-led planning could engineer progress through rational design.

Jane Jacobs offered a fundamentally different perspective. Observing cities at street level, she argued that vitality emerges from density, diversity, and informal interaction: the “sidewalk ballet” of everyday life. Safety, economic activity, and social cohesion were not products of grand design but of intricate local ecosystems that resist simplification. Jacobs did not reject planning; she rejected reductionism.

In the UK, her perspective has shaped policy rhetoric for decades. Contemporary planning frameworks emphasise walkability, mixed use, public realm, and community engagement. Yet delivery mechanisms often remain constrained by what can be quantified. Transport models measure vehicle flows with precision. Cost-benefit analyses monetise infrastructure investments. But the value of sociability, belonging, or a pleasant public realm has historically been difficult to express in defensible economic terms.

Consequently, decisions frequently favour what is measurable over what is meaningful. Artificial intelligence alters this balance. Behavioural data at scale now enables planners to quantify patterns of movement, dwell time, environmental comfort, and social interaction, providing empirical support for insights that were once largely qualitative.

The regeneration of King’s Cross in London illustrates the significance of this shift. The district’s success derives not only from architecture but from a meticulously curated public realm designed to encourage lingering, interaction, and repeated use. Granary Square operates as a civic theatre: playground, event space, lunchtime refuge, and evening destination in equal measure. Its vitality reflects precisely the multi-functional complexity Jacobs championed. AI tools can now simulate such behavioural dynamics before construction, allowing developers and authorities to test whether proposed spaces will sustain diverse patterns of use across time and season.

The organic machine: Wright, Gehl, and the human-scale city

Between Moses’ mechanistic modernism and Jacobs’ humanist critique lies a third tradition: the attempt to reconcile technological progress with organic urban life. Frank Lloyd Wright’s Broadacre City envisioned decentralised communities integrated with landscape, enabled rather than dominated by technology. Wright believed the machine could liberate individuals from industrial urban constraints, facilitating a more humane spatial order.

Although largely theoretical, this vision resonates in contemporary Britain. Remote work, digital connectivity, and distributed services are already reshaping settlement patterns, blurring the boundaries between urban and suburban life. Jan Gehl later translated such human-centred philosophy into operational principles, focusing on the “eye-level city” experienced by pedestrians moving at walking speed. His work demonstrated that small-scale design decisions (façade articulation, seating, lighting, and permeability) profoundly influence behaviour.

Artificial intelligence now provides the analytical capacity to operationalise these insights at scale. Rather than imposing order from above, intelligent systems can model emergent behaviour from below, simulating how people actually inhabit space.

Heritage-led regeneration schemes such as Battersea Power Station exemplify both the promise and complexity of this approach. The retained industrial landmark anchors the development in collective memory, while the new public realm seeks to create a vibrant urban quarter. Yet the project also reveals tensions between destination-making and everyday liveability, tensions that AI-enabled modelling could help reconcile by optimising crowd flows, retail mix, transport demand, and environmental comfort so that such places function as communities rather than merely attractions.

The Gehl Test: Quantifying the “unmeasurable”

Historically, evaluating that human-scale quality required painstaking observation. Teams manually counted pedestrians, mapped desire lines, and recorded how public spaces were used across time. Computer vision transforms this process.

Sensors and CCTV feeds can now analyse pedestrian flows across seasons and dayparts, patterns of lingering versus transit, social clustering and informal gathering points, use of amenities such as seating and shade, and accessibility constraints affecting different user groups.
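The distinction between lingering and transit, for example, reduces to simple trajectory statistics once a computer-vision pipeline has produced tracked positions. The sketch below is a minimal illustration in Python, with hypothetical track data and uncalibrated, purely illustrative thresholds:

```python
from dataclasses import dataclass
import math

@dataclass
class TrackPoint:
    t: float  # seconds since start of observation
    x: float  # metres, in an assumed site coordinate frame
    y: float

def classify_track(points: list[TrackPoint],
                   speed_threshold: float = 0.5,   # m/s; below this, likely lingering
                   min_duration: float = 60.0) -> str:
    """Label a pedestrian track as 'lingering' or 'transit'.

    A track counts as lingering when the person stays in view for at
    least `min_duration` seconds and their mean speed falls below
    `speed_threshold`. Both thresholds are illustrative, not calibrated.
    """
    duration = points[-1].t - points[0].t
    path_length = sum(
        math.hypot(b.x - a.x, b.y - a.y)
        for a, b in zip(points, points[1:])
    )
    mean_speed = path_length / duration if duration > 0 else 0.0
    if duration >= min_duration and mean_speed < speed_threshold:
        return "lingering"
    return "transit"

# A person crossing a square briskly vs. one sitting near a bench.
crossing = [TrackPoint(t, 1.4 * t, 0.0) for t in range(0, 30)]
sitting = [TrackPoint(t, 10.0 + 0.01 * t, 5.0) for t in range(0, 120)]
print(classify_track(crossing))  # transit
print(classify_track(sitting))   # lingering
```

Real deployments would of course work from anonymised detections and far richer features (posture, group size, shade and seating context), but the underlying logic is of this kind.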

Gehl’s own research extended well beyond movement counts. Through systematic field experiments across Copenhagen and other European cities, he examined the cumulative effects of what might be termed visual pollution: excessive signage, street clutter, traffic engineering artefacts, poorly coordinated lighting, and competing visual stimuli that fragment the pedestrian experience. His findings suggested that such elements do not merely affect aesthetics; they materially reduce perceived comfort, legibility, and willingness to linger. Conversely, environments with coherent sightlines, restrained signage, and active frontages encourage slower movement, social interaction, and a stronger sense of place.

Gehl also challenged the orthodox twentieth-century doctrine of strict functional separation, the division of streets into discrete zones for vehicles, cyclists, and pedestrians, and of districts into single-use enclaves. His work on shared space principles argued that carefully designed ambiguity can enhance safety and sociability by encouraging users to negotiate space through eye contact and behavioural cues rather than relying solely on signals and barriers. The redesign of Exhibition Road in London provides a prominent UK example: by removing kerbs, conventional traffic markings, and rigid segregation, the scheme created a unified surface accommodating pedestrians, cyclists, and vehicles within a slower, more attentive environment. While not without controversy, it demonstrates how subtle design interventions can recalibrate behaviour without heavy-handed enforcement.

Artificial intelligence now enables such qualitative insights to be tested quantitatively. Computer vision can assess how people navigate shared environments, where hesitation occurs, how visual clutter affects movement patterns, and whether redesigned streets genuinely encourage longer dwell times or safer interactions. In effect, AI allows planners to move beyond anecdotal evidence toward measurable behavioural outcomes.

Particularly valuable is the identification of micro-nodes of activity: locations that consistently attract people despite appearing unremarkable on the plan. Such nodes often underpin commercial success and social vibrancy. For developers, this reduces uncertainty by aligning investment with demonstrated demand. For local authorities, it strengthens the case for targeted public realm improvements. Placemaking evolves from designing for hypothetical users to learning from real behaviour.
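The micro-node idea can be sketched as a density scan over observed dwell locations. The grid-based approach below is deliberately minimal, with synthetic coordinates and uncalibrated parameters; a production system would use more robust spatial clustering, but the principle is the same:

```python
from collections import Counter
import random

def find_micro_nodes(dwell_points, cell_size=5.0, min_count=20):
    """Locate grid cells where lingering observations concentrate.

    `dwell_points` is a list of (x, y) positions (metres) where people
    were observed lingering. Cells of `cell_size` metres containing at
    least `min_count` observations are returned as candidate micro-nodes,
    as (centre_x, centre_y, count) tuples. Parameters are illustrative.
    """
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in dwell_points
    )
    return [
        ((cx + 0.5) * cell_size, (cy + 0.5) * cell_size, n)
        for (cx, cy), n in counts.items()
        if n >= min_count
    ]

# Synthetic example: a tight cluster near (12, 8) amid scattered noise.
random.seed(1)
points = [(12 + random.uniform(-2, 2), 8 + random.uniform(-2, 2)) for _ in range(40)]
points += [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
nodes = find_micro_nodes(points)
print(nodes)  # one candidate node, centred on the cell containing the cluster
```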

The ongoing evolution of Queen Elizabeth Olympic Park underscores the importance of this capability. Conceived with long-term legacy ambitions, the area has adapted as actual usage patterns diverged from initial projections. AI-driven post-occupancy analysis could support continuous recalibration, transforming placemaking from a one-off intervention into an adaptive, iterative process, responsive to changing demographics, lifestyles, and climate conditions.

Global lessons: Borrowing brilliance

International precedents demonstrate how data-driven approaches can enable politically ambitious urban interventions, but they also reveal a growing divergence in the philosophies underpinning city-making.

Singapore’s digital twin models microclimates, allowing planners to mitigate heat islands and wind tunnels before construction, a critical capability in high-density tropical environments where environmental comfort directly determines street-level viability. Barcelona’s Superblocks relied on sophisticated traffic modelling to demonstrate that reallocating road space to pedestrians would not produce systemic congestion, enabling policymakers to pursue people-centred strategies with confidence. Across much of continental Europe, similar initiatives reflect a broader shift away from car-dominated planning toward compact, walkable urbanism that prioritises public realm, mixed use, and everyday liveability.

This trajectory represents, in part, a reaction against the large-scale, top-down interventions of the mid-to-late twentieth century. Many European cities experimented with modernist megastructures, elevated road systems, and functional zoning before gradually rediscovering the economic and social value of fine-grained urban fabric, architectural continuity, and human-scale streets. Today, policy frameworks in cities such as Paris, Copenhagen, and Vienna increasingly emphasise 15-minute neighbourhoods, active transport, adaptive reuse, and community infrastructure, approaches that align closely with the Jacobs–Gehl tradition.

In contrast, parts of the Gulf region, including the UAE and Saudi Arabia, continue to pursue a more centralised model of urban development. Projects in Dubai, Abu Dhabi, and Riyadh are often conceived at metropolitan or even national scale, driven by state-led investment and executed through heavy top-down masterplans that prioritise imposing architecture, global visibility, and rapid delivery. Starchitects are commissioned to produce landmark structures that signal ambition and modernity, whilst entire districts are delivered in compressed timeframes rarely achievable within European planning systems.

Such developments can achieve extraordinary coherence and infrastructure integration, but they also carry risks associated with top-down placemaking: limited organic evolution, uncertain long-term community formation, and potential misalignment between design intent and everyday use. The challenge is not one of technical capability (many Gulf projects deploy cutting-edge modelling, digital twins, and environmental engineering) but of behavioural calibration. Monumental scale and architectural spectacle do not automatically translate into street-level vitality.

Artificial intelligence may ultimately serve as a bridge between these paradigms. In rapidly built environments, AI-driven post-occupancy analytics can reveal how residents and visitors actually inhabit newly created districts, enabling adjustments to programming, transport, public space, and land use over time. In established European cities, the same tools can support incremental transformation without sacrificing heritage or continuity.

For the UK, which increasingly occupies a middle ground between these approaches, the lesson is not to emulate any single model wholesale, but to combine strategic ambition with human-scale sensitivity. Britain’s historic urban fabric, complex governance structures, and public expectations favour evolutionary rather than revolutionary change. Yet the scale of housing demand and infrastructure renewal required over the coming decades will necessitate more coordinated action than traditional incrementalism alone can deliver.

Closer to home, emerging initiatives suggest a cautious move toward such synthesis. Greater Manchester’s integration of transport analytics, digital infrastructure, and data sharing across authorities represents one of the most advanced attempts to apply systems thinking at a regional scale. By modelling relationships between mobility, employment distribution, and housing supply, city-regions can direct investment toward locations where it yields the greatest social and economic return, whilst preserving the qualities that make places liveable.

In this context, AI does not prescribe a single urban future. Rather, it equips decision-makers with the capacity to test competing visions, from high-density flagship developments to fine-grained neighbourhood regeneration, against measurable outcomes. The most successful cities of the coming decades are likely to be those that balance strategic scale with human experience, technological sophistication with cultural continuity, and ambition with adaptability.

The UK delivery challenge: Fragmentation and risk

Despite these opportunities, structural constraints remain formidable. Planning authority fragmentation, inconsistent data standards, and resource limitations hinder widespread adoption. Many local planning departments lack the capacity to interrogate sophisticated modelling outputs, creating an asymmetry between well-resourced private actors and public institutions.

Moreover, the UK’s discretionary planning system emphasises negotiation rather than rule-based certainty. Whilst this flexibility allows context-sensitive decisions, it also introduces unpredictability that discourages innovation. AI could mitigate this uncertainty by providing shared evidential frameworks, but only if the capability is developed on both sides of the public-private divide.

There is, however, a more fundamental opportunity that fragmentation and resource constraints obscure: the partial automation of the planning process itself. A significant proportion of planning applications (householder extensions, minor alterations, changes of use within established parameters) involve limited material impacts that are nonetheless subjected to the same committee-based deliberation as far more consequential schemes. The result is a system in which trivial decisions consume disproportionate time and resources, whilst significant applications queue behind them. AI-driven assessment tools, trained on planning policy, environmental data, and precedent, could handle such cases with greater consistency and speed than any committee, freeing planning officers and elected members to concentrate on decisions that genuinely warrant human judgement. Evidence-based automation of this kind would not diminish democratic accountability; it would sharpen it by ensuring that scrutiny is reserved for those moments where it truly matters.
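A triage layer of this kind might, in its simplest form, be a transparent rule set that routes only genuinely routine cases to automated assessment. The sketch below is purely illustrative: the categories, thresholds, and referral criteria are assumptions for the example, not drawn from any adopted policy, and every automated outcome would in practice remain subject to officer oversight:

```python
from dataclasses import dataclass

@dataclass
class Application:
    category: str             # e.g. "householder_extension", "major_residential"
    floor_area_sqm: float     # additional floorspace proposed
    in_conservation_area: bool
    affects_listed_building: bool
    objections: int           # representations received to date

# Illustrative routine categories; a real system would encode adopted
# local policy and validated precedent.
ROUTINE_CATEGORIES = {"householder_extension", "minor_alteration",
                      "change_of_use_minor"}

def triage(app: Application) -> str:
    """Route an application to automated assessment or human deliberation."""
    if app.category not in ROUTINE_CATEGORIES:
        return "officer_or_committee"
    if app.affects_listed_building or app.in_conservation_area:
        return "officer_or_committee"
    if app.floor_area_sqm > 50 or app.objections > 0:
        return "officer_or_committee"
    return "automated_assessment"

loft = Application("householder_extension", 18.0, False, False, 0)
tower = Application("major_residential", 12_000.0, False, False, 42)
print(triage(loft))   # automated_assessment
print(triage(tower))  # officer_or_committee
```

The value of keeping the rules legible, rather than burying them in an opaque model, is that applicants and members can see exactly why a case was or was not escalated.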

Perhaps the most persistent barrier to development is not technical feasibility but trust. Communities often view consultation exercises sceptically, whilst developers fear that objections may reflect vocal minorities rather than representative sentiment. Natural language processing offers a mechanism to bridge this gap. By analysing large volumes of consultation responses, AI can identify common themes, priorities, and concerns, ensuring that decision-makers engage with the collective voice rather than isolated extremes. Communities could also benefit from interacting with natural language models (chatbots), which could ask more pertinent questions than current structured consultation formats, capturing the nuance and underlying reasoning those formats miss. Similarly, AI-driven simulations of sunlight, noise, traffic, and infrastructure demand can shift discourse from speculative fears to evidence-based discussion.
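Theme extraction from consultation responses can be illustrated with a deliberately simple keyword tally. The lexicon below is hypothetical; a real pipeline would learn themes from the corpus itself (topic modelling or embedding-based clustering) rather than hand-coding keywords:

```python
from collections import Counter
import re

# Hypothetical theme lexicon, for illustration only.
THEMES = {
    "traffic":     {"traffic", "congestion", "parking", "cars"},
    "green_space": {"park", "trees", "green", "playground"},
    "housing":     {"affordable", "homes", "housing", "density"},
}

def tally_themes(responses: list[str]) -> Counter:
    """Count how many responses touch each theme at least once."""
    tally = Counter()
    for text in responses:
        words = set(re.findall(r"[a-z]+", text.lower()))
        for theme, keywords in THEMES.items():
            if words & keywords:
                tally[theme] += 1
    return tally

responses = [
    "Worried about extra traffic and parking on our street.",
    "Please keep the trees and add a playground.",
    "We need genuinely affordable homes, but congestion is a concern.",
]
print(tally_themes(responses))
```

Even this toy version shows the essential shift: a single response can register concerns under several themes at once, so decision-makers see the distribution of sentiment rather than whichever objection was loudest.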

Used transparently, such tools may strengthen democratic legitimacy rather than undermine it. In this context, AI functions not merely as analytical support but as institutional infrastructure for decision-making.

Performance, accountability, and the ethics of the algorithm

Future placemaking will be inseparable from environmental performance. AI can optimise orientation for daylight and energy efficiency, model biodiversity outcomes, predict flood risks, and manage urban ecosystems dynamically. Integration with sensor networks enables continuous monitoring rather than one-off compliance assessments. For investors bound by ESG commitments, such capabilities transform sustainability from narrative aspiration into operational reality. Deploying these capabilities responsibly, however, demands as much attention as developing them.

With analytical power comes ethical responsibility. Algorithmic bias poses real risks: systems trained on historical data may inadvertently reproduce inequalities, allocating fewer resources to areas that have historically been underserved.

Privacy considerations are equally significant. Monitoring public space must not evolve into surveillance. Robust governance frameworks, anonymisation, and transparency are essential prerequisites, not afterthoughts.

Human oversight remains indispensable. AI can generate optimised solutions according to defined parameters, but determining those parameters is fundamentally a societal choice. Placemaking ultimately reflects human values, not merely computational efficiency.

Towards the age of empathy

Paradoxically, the rise of artificial intelligence may enable more human-centric cities. As AI absorbs technical complexity, from environmental modelling to transport forecasting, professionals can devote greater attention to heritage, identity, aesthetics, and social cohesion. The qualities that make places meaningful are precisely those least amenable to algorithmic optimisation.

Jan Gehl’s enduring metaphor remains apt: a successful city resembles a successful party, where people stay because they want to, not because they must. The science of place does not extinguish urban magic; it renders it less accidental and more deliberate.

For the UK property sector, the implications are profound. Artificial intelligence offers a means to reconcile scale with sensitivity, growth with liveability, and economic imperatives with social value. The decisive question is no longer whether AI will shape Britain’s towns and cities, but how, and under whose stewardship.

Those who master this integration will shape the next generation of places. Those who do not may find themselves designing for a past that is already receding.