This article takes a slightly different direction than previous ones. Rather than diving into purely technical topics, we’ll explore a set of empirical observations—often referred to as “laws”—that have repeatedly emerged in software engineering. While the field is filled with many such laws, I’ll focus on five that I personally find useful.
Conway’s Law
In my opinion, the first and probably the most important law is Conway’s Law. As the computer scientist Melvin Conway originally said:
Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.
Conway’s Law may initially seem a bit random to engineers who haven’t yet worked in larger teams or organizations. At least, that’s how I felt when I first heard about it early in my career. However, something about how information flows within a company—how projects are planned and how teams are structured—makes Conway’s Law highly prevalent in real-world scenarios. The two most common extreme examples of Conway’s Law are:
Large single team → monolithic architecture. When a company has a single, large engineering group, it often produces a monolithic architecture.
Very small, fragmented teams → anemic¹ microservices. Conversely, when a company has many small teams with loosely defined domains and responsibilities, the resulting architecture often comprises numerous anemic microservices.
Between these two extremes lies a broad spectrum of possible outcomes—and it’s not merely about the size of your team or organization. Teams with well-defined business responsibilities and clear ownership tend to produce highly cohesive services. In contrast, if multiple teams share overlapping responsibilities, expect duplicated domain logic across different services, because “shared” functionality rarely emerges without dedicated coordination. Likewise, if there is a specific communication dependency among teams, the resulting service dependency graph will often mirror those same lines of communication, even when it isn’t required from a technical point of view.
Conway’s Law is so prevalent in the industry that many product and team management books recommend employing an “inverse-Conway maneuver”²—reshaping engineering teams or entire departments to reinforce a desired software architecture outcome.
Hyrum’s Law
Hyrum’s Law was coined by software engineer and computer scientist Hyrum K. Wright:
With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
This law offers an insightful observation about large-scale systems. Once a system reaches a certain scale, there will always be some consumers who rely—explicitly or implicitly—on the exact implementation details, rather than strictly adhering to the defined interface. Wright himself drew inspiration for this concept from personal experience:
I'm a Principal Scientist at Adobe, and before that, a software engineer at Google. I work on large-scale code change tooling and infrastructure, and spent several years improving Google's core C++ libraries. The above observation grew out of experiences when even the simplest library change caused failures in some far off system.
Another noteworthy example appears in an article published last year, in which Abenezer Belachew pointed out how the Go maintainers account for this law in the standard library. The following comment appears in the net/http package:
func (e *MaxBytesError) Error() string {
	// Due to Hyrum's law, this text cannot be changed.
	return "http: request body too large"
}
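To see why that comment exists, consider the following sketch (my own illustration, not from Belachew’s article). The first check matches on the error text, an implementation detail; the second relies only on the exported *http.MaxBytesError type. Once enough callers take the first approach, the exact string becomes part of the de facto contract:

package main

import (
	"errors"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Stand-in for an error returned when a request body exceeds the limit.
	var err error = &http.MaxBytesError{Limit: 1 << 20}

	// Brittle: depends on the exact error text. Per Hyrum's Law, enough
	// callers doing this effectively freezes the string forever.
	if strings.Contains(err.Error(), "request body too large") {
		fmt.Println("oversized body (detected via string match)")
	}

	// Robust: depends only on the documented, exported error type.
	var maxErr *http.MaxBytesError
	if errors.As(err, &maxErr) {
		fmt.Printf("oversized body (limit: %d bytes)\n", maxErr.Limit)
	}
}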
Like Conway’s Law, Hyrum’s Law frequently emerges in industry practice once software projects grow beyond a certain scale.
Goodhart’s Law
Originally coined by economist Charles Goodhart in the context of monetary policy, the law states:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
However, it is often rephrased more broadly, in a formulation attributed to anthropologist Marilyn Strathern:
When a measure becomes a target, it ceases to be a good measure.
Though it originated in economics, the law is highly relevant to software engineering, as it is to many other fields. For instance, focusing only on increasing code coverage can prompt developers to write superficial tests that do little to improve overall quality. Similarly, tying performance reviews to the number of tickets closed can encourage teams to tackle only easy or trivial issues, while more complex bugs or technical debt remain unaddressed. A more serious example emerges in large-scale system design: if “architecture compliance” is measured purely by whether certain frameworks or patterns are adopted, engineers may implement them just to check boxes, ignoring long-term maintainability and best-fit considerations. In each scenario, the original intention behind the metric—ensuring robust, high-quality software—ends up overshadowed by the pressure to meet a specific target.
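To make the code-coverage example concrete, here is a minimal, hypothetical Go sketch (Discount and both test names are invented for illustration). Both tests drive Discount to full line coverage, but only the second one can ever catch a bug:

package pricing

import "testing"

// Discount applies a percentage discount to a price given in cents.
func Discount(cents, percent int) int {
	return cents - cents*percent/100
}

// TestDiscountCoverageOnly executes every line of Discount, so the
// coverage metric reports 100%, yet it asserts nothing: it still
// passes even if the arithmetic above is completely wrong.
func TestDiscountCoverageOnly(t *testing.T) {
	Discount(1000, 10)
}

// TestDiscountBehaviour serves the metric's original intent by
// checking the actual result.
func TestDiscountBehaviour(t *testing.T) {
	if got := Discount(1000, 10); got != 900 {
		t.Errorf("Discount(1000, 10) = %d, want 900", got)
	}
}

With Discount in the package proper and the tests in its _test.go file, go test -cover rewards both tests equally; only the second protects the behaviour the metric was meant to guarantee.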
Jakob’s Law
Coined by web usability consultant and human-computer interaction researcher Jakob Nielsen:
Users will anticipate what an experience will be like, based on their mental models of prior experiences on websites. When making changes to the design of a website, try to minimize changes in order to maintain ease of use.
This principle was originally formulated for website UX, but I believe it has broader applications. In my opinion, Jakob’s Law illustrates how human behavioural psychology manifests in software engineering. Whether you’re designing a website UI, exposing an HTTP API, or publishing an open-source Python library, you can expect resistance or dissatisfaction if your software diverges from popular, established patterns or workflows, even if those patterns aren’t inherently better (or may even be worse).
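A small, hypothetical Go sketch of the same effect at the API level: both constructors below work, but only the first matches the (value, error) convention users carry over from every other Go library they have used:

package main

import (
	"errors"
	"fmt"
)

type Client struct{ addr string }

// NewClient follows established Go conventions: a New* constructor
// returning (value, error). Users' mental models transfer directly.
func NewClient(addr string) (*Client, error) {
	if addr == "" {
		return nil, errors.New("client: empty address")
	}
	return &Client{addr: addr}, nil
}

// MakeClient is functionally identical but reverses the idiomatic
// return order. Nothing is technically wrong with it, yet per Jakob's
// Law it will frustrate users who expect the common pattern.
func MakeClient(addr string) (error, *Client) {
	if addr == "" {
		return errors.New("client: empty address"), nil
	}
	return nil, &Client{addr: addr}
}

func main() {
	c, err := NewClient("localhost:8080")
	fmt.Println(c, err)
}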
Linus’s Law
Coined by software engineer and open-source advocate Eric S. Raymond in honour of Linus Torvalds:
Given enough eyeballs, all bugs are shallow.
In his essay “The Cathedral and the Bazaar”, Raymond contrasts two approaches to free software development. The “cathedral” model restricts source code to a small group of developers until each formal release, while the “bazaar” model makes it publicly available throughout the development process. Raymond credits Linus Torvalds, creator of the Linux kernel, with pioneering the bazaar approach. The essay’s central thesis—nicknamed “Linus’s Law”—is that when more people can view and test the source code, bugs are discovered and fixed more quickly. In other words, “given enough eyeballs, all bugs are shallow.” This law is particularly relevant to cryptographic algorithms and security software, where open scrutiny is crucial for uncovering potential vulnerabilities. By contrast, companies that tout “proprietary” solutions in this area may be relying on secrecy rather than robust peer review—an approach that can leave hidden flaws unaddressed.
¹ An anemic microservice is one that offers minimal business value because it lacks domain logic.