Hacker News
Low level is easy (2008) (yosefk.com)
153 points by yagizdegirmenci on July 29, 2021 | 101 comments


There was once a programmer who was attached to the court of the warlord of Wu. The warlord asked the programmer: "Which is easier to design: an accounting package or an operating system?"

"An operating system," replied the programmer.

The warlord uttered an exclamation of disbelief. "Surely an accounting package is trivial next to the complexity of an operating system," he said.

"Not so," said the programmer, "When designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to the tax laws. By contrast, an operating system is not limited by outside appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design."

The warlord of Wu nodded and smiled. "That is all good and well, but which is easier to debug?"

The programmer made no reply.

– The Tao of Programming, 3.3


> an operating system is not limited by outside appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design.

I think I understand the intuition behind that... in other words... coding an OS kernel seems to have less entropy, fewer degrees of freedom, and less subjectivity than frontend programming like painting GUI pixels with the Javascript-framework-of-the-month.

But there's a lot of subjectivity and philosophical debates about low-level os design:

- the famous "Worse is Better" essay tries to explain the difference between the "New Jersey approach" (Bell Labs UNIX) and the MIT approach to error handling in a system routine [1]

- the famous debate between Linus Torvalds and Andrew S. Tanenbaum about microkernels vs monolithic kernels [2]

- David Cutler (designer of VMS & Windows NT) criticizing UNIX I/O architecture [3]

- even lower level than the OS is CPU design, where some criticize RISC-V for not having arithmetic overflow traps in the minimum base specification. And in the 1990s, RISC (the MIPS philosophy) vs CISC (the Intel philosophy) was a big debate.

[1] https://dreamsongs.com/RiseOfWorseIsBetter.html

[2] https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...

[3] https://retrocomputing.stackexchange.com/questions/14150/how...


> I think I understand the intuition behind that... in other words... coding an OS kernel seems to have less entropy, fewer degrees of freedom, and less subjectivity than frontend programming like painting GUI pixels with the Javascript-framework-of-the-month.

I don't think that's the right takeaway. A decade ago I worked as an SAP consultant, and did some payroll projects in it. The display part of it was easy. All they wanted was a great big table display, the moral equivalent of a read-only Excel spreadsheet. No faffing about with CSS or Javascript or whatever.

No, the real complexity was the freaking business logic. Office workers, executives, contractors all had different rules for how their pay was calculated, there were all sorts of line items that applied to different classes of employee differently. The company had spent years building those reports by hand and there was a whole lot of organisational knowledge hidden in their heads, corner cases and exceptions all over the place. The sort of thing you can navigate in your head relatively easily if you've been doing it for years on end, but a newcomer has no chance of understanding - so a lot of iteration was involved in getting where we needed to be.

Put fiddly business logic and rapid iteration together, and you're burying yourself under a massive pile of spaghetti code in no time flat. Trying to keep all of that sane, both for myself and for whoever came after me, was challenging in ways that none of the more systems-y work I've done since has ever replicated.


Yes. This is why the much-maligned (around here) category of “enterprise software” is so hard and why the results are often kind of shitty.


Perhaps another way to describe this intuition is that with operating systems, the developer is also the subject matter expert and the one setting the requirements. When you are the one setting requirements, those requirements are going to seem inherently less arbitrary than requirements set by someone else.


Many are the sects who pursue the Way, and countless their contradictions! Those who write accounting apps, in contrast, are just hella boned by uninteresting complexity.

In my experience OS kernels just make more sense than most front-end code, regardless of the kernel design. There may be a different plan for each kernel, but there is a plan. Major things change less frequently, changes are better thought through, and people think more about what they're doing. This probably makes economic sense, since the costs of kernel breakage are that much greater than the costs of a broken UI. But the kernel's design is definitely easier to make -- because it has a design, rather than ever-shifting chaos.


Exactly, working at a low level still means that “the programmer operates as a mediator between people having different ideas”. The different people are just hardware builders instead of library writers.


But I think that even with all of that, writing an OS is still much easier than an accounting package. The way forward is paved smooth by the hundreds that have gone before; by that I mean, hundreds of Unix clones have been made and open-sourced. Things like microkernels and NT-style I/O architecture are esoterica that you only see in the embedded world (to which Google seems to be limiting Fuchsia at the moment) and in enterprise-oriented operating systems (like Windows, which home users only use out of a weird quirk of history).


A consequence of AT&T originally not being allowed to sell UNIX and the existence of the Annotated UNIX book.

Without it, UNIX would have been as esoteric as anything else and most likely would have died instead of spawning multiple clones.


Genuinely interested, do you have some pointers about what you mean by NT-style IO? How would you characterize it?


I was referring to the "David Cutler (designer of VMS & Windows NT) criticizing UNIX i/o architecture" comment above, which apparently refers to Dave Cutler's criticism of Unix as "Get a byte, get a byte, get a byte byte byte", which seems to me to refer to the Unix/Linux style of programming (in which programmers overuse things like getchar() and putchar()). Whew.

But. There's another, much bigger difference between Unix and VAX/VMS/WinNT: The way I/O operations are handled in the kernel itself. In Unix systems, the kernel just passes I/O operations directly to the driver code, which immediately tries to do the operations on hardware. It's a very microcontroller-ish view of the world. WinNT treats I/O operations as sort of a "packet" which encodes the operation to be performed. That packet passes down through the layers of the kernel, including 3rd party modules and eventually drivers, until the operation is performed on hardware. Then, another packet percolates back up the stack to your app with the result.

Things like antivirus, dropbox, etc. can insert themselves into the stack and intercept/modify/reroute I/O operations and results globally.

I think that Linux probably has something like this, or you can probably cobble it together by using async I/O, io_uring, and/or eBPF.


One thing to remember is that UNIX's I/O model was designed for 64 kB of total memory space on a simplistic mid-range (at best) computer, while trying to fit in concepts from a 36-bit Multics machine whose I/O primitive was the equivalent of mmap().


POSIX is ask first (can I do I/O?) then do, while NT is do, then ask (is it done?).


TempleOS is the one pure operating system created without any debates. RIP Terry A. Davis.


You mean, Terry won all of the debates ;)


The only OS endorsed by the Holy C.


Exactly.

Dig around in C land and you'll find hundreds of opinions, and even more libs and toolkits that have come and gone out of fashion.

Web stacks are “computing” to industry right now. In the 80s and 90s it was OS and database.

If low-level had the old economic attention, we’d see the same activity and mess as we did in the 80s and 90s. Everyone has a compiler to tell you about, and a build system DSL that’s just right for your problem!

But big corps largely control that, so a new layer of abstraction was added via the web, and we're now seeing a shift toward less UI and machine-driven decision making that just gives us the next step in the recipe.

Frankly I don’t mind technology for navigating human society; buying stuff and literal navigation.

I still unplug with musical instruments and the forest.

The addictive applications we've built in technology (games, social media, multimedia) got really banal for me a long time ago. If I hear things like "Itsa me, Mario" or something about Halo, I recoil at the cringe.

I guess I don’t want my psychedelic noise wrapped up in branding and consumerism.


BTW it's not obvious that an OS is harder to debug.

A logical error in an accounting package can probably often only be found by a checking tool that has the equivalent of all the functionality of the package under test.

OS bugs are more likely to have the side effect of violating language invariants (which e.g. KASan, UBSan and KCSan often detect), violating protocols (so you can stress-test e.g. a TCP implementation and detect the bug), or violating API invariants (so you can use an API-level stress test like stress-ng to detect the bug).

All of these tools are much less complex than the software under test. I think the warlord of Wu made the comment about debugging before massive automated testing, compiler instrumentation etc were a thing.

And I think a similar argument can be made wrt software and hardware-level visibility/tracing mechanisms - again an OS will end up being easier than higher-level software given a good investment in tooling.


The accounting package is much easier to design if you hold both to the same level of requirements. If the goal is to make a toy product, then making an accounting package is trivial while an OS is not. If the goal is to make a product that will get a lot of users, then making an OS is almost impossible, while plenty of people sell accounting packages. Low level is only easier than high level if you don't intend to sell the low-level program but do intend to sell the high-level program.


Requirements for an accounting system? That'd be hilarious if I weren't so traumatized by past experience. Seriously, the tensor product of jurisdictions, clients, strategies, and the various preferences of your rotating bosses: it's a cascade of requirements that you could never satisfy.


Try writing a commercial OS that can compete with iOS, Windows or Linux. I doubt it is much easier; there are so many features people expect from a modern OS.

The point is that when you compete with the best in the world in any area, it will be hard to win. Low-level code is only easy when it isn't an important part of your product. Take backends to web apps: you don't write any low-level code for them. The networking is handled by application server frameworks, the data storage by prepackaged database programs (you don't even have to open a file), etc. It isn't low level; it is just high-level glue code between different libraries. This works just fine, since you aren't trying to compete using low-level code, so using off-the-shelf products with almost no code written by you is fine. But when your product requires you to compete using low-level code, then it is really hard.


That seems pretty dumb. There aren’t competing ideas about how an operating system should work?


hahah this is so good. you made my day


This applies so much to gamedev.

I’ve tried Unity, Unreal, and Godot. But I’ve never been able to produce anything, because there’s just so much to learn: rigging, particle systems, AI pathing, etc. And so many weird bugs and workarounds. Unity in particular sometimes seems like it was hacked together, and there are always multiple ways to do everything (DOTS, UI, animation): the broken “beta” way and the “legacy” way.

In contrast, I write my own engine, and even though I have to make everything it’s so much easier. All I need to know is graphics and code. My game engines are much more intuitive to me because I wrote them. And no workarounds, because if the engine has a bug, I can fix it at the source. Of course, this means I end up making smaller games and it’s harder to collaborate with others. But it’s better than no games at all.

With like 20+ different game engines out there, I know I’m not alone.


A good way to learn game engines is to build a series of games that only require you to learn a few new things with each iteration. The same way you wouldn’t roll your own game engine that has all the features of Godot on your first try.


Yes! I coded many games with Flash/AS3, which is regarded as low level these days, but it actually had a lot of abstraction over the rendering. For years I would battle with the opinions of the framework, usually attempting to smooth over it with my own even higher level framework.

It wasn't until I started coding directly with HTML canvas that I was finally able to produce code and workflows without the battle. I don't understand why pushing and popping to transformation stacks isn't the norm, when it is just sooo much cleaner. As soon as an underlying framework introduces an object hierarchy for rendering state, in the form of a display list or scene graph, it takes away so much of your ability to structure game state how it best makes sense. I would even say that ECS (entity, component, system) is oftentimes an attempt to smooth over the opinionated-framework problem, but it isn't all that great compared to plain and simple data structures for your game state.

I wish more people were using rendering libraries such as Kha (http://kha.tech/), building tooling on that sort of tech, and making those lower levels more robust and portable.


Do you actually finish the games? I’ve tried the DIY game-engine approach before, but I’ve never finished a substantial game because I get caught up in the weeds.


Beware the eternal enginedev... many a brave soul has fallen into this trap.


The way out is to focus on the game. Only build up the engine to the point the game needs it to be built. The bulk of your focus should be on game logic, even when you take writing the engine into account.

Granted, I haven't finished my game this way, but the engine isn't really the bottleneck. It's deciding on, and implementing, appropriate behaviors, often for silly, fiddly corner cases.


Exactly this. After lots of trial and error, I’ve found it easier to build better engines by extracting and abstracting the common elements from my game-development process than by focusing solely on engine development. I really can’t describe how much my productivity has gone up since I switched focus: adding constraints to my development process and narrowing in on more parochial problems instead of always pondering the general/wider abstractions. It seems that abstractions are the natural consequence of focusing on a narrow and concrete problem, but people instead try to jump directly to the wider abstract space without any bounds or constraints set, eventually overwhelming themselves.


Developing the engine is so much more fun though.


I think this is why developing emulators is fun. Once you’re done, the games have already been made for you!


Sort of :). Most of the games are “technically” playable, but not quite ready for distribution (buggy, missing settings), and I’ve never published.

The main issue is that I make most of the game, then realize that it’s really unbalanced, the controls are clunky, and it isn’t that fun. So I lose interest. Probably related: I don’t really enjoy video games anymore because I get bored too easily.


Sounds like you've finished tons of games. The games just happen to be sandbox gamedev games - i.e., the engines themselves. Release the engines!

Also, you could just go work as an engine programmer in AAA or for Unity/Unreal/etc....


> if the engine has a bug, I can fix it at the source

While this is probably going to be easier in something you wrote, note that it's possible in both Godot (F/LOSS) and Unreal (despite being proprietary they give you access to the source for free when you sign up).


Yeah it’s definitely possible but a lot harder. Working on my own code is easier than someone else’s code. Of course they would say the same.

Collaboration is a really important skill that I work hard to improve. Just not when making video games.


Yeah, I agree, although it's itself a good skill to practice.

I just wanted to surface that of the three, Unity has an additional barrier.


Everyone says not to write your own game engine, but if you don't already know one, learning an existing engine takes about the same amount of time, IMO.

The main justification is when you're working in a team everyone can use the same tools.


You might enjoy raylib, a library for videogame programming.

https://www.raylib.com/


This looks great, thanks for sharing.


I have known this for many years but found it difficult to explain to others. Backends are so much easier to build compared to frontends, and I mean decent and reliable ones. The attitude of backend people looking down upon frontend people is really stupid and unsubstantiated. Scalable backends are increasingly getting their own layers of complexity these days, but still, on the backend you will never deal with frameworks with millions of entry points. If you don't believe me: UIKit alone exposes, I think, about half a million public symbols.

Nevertheless, I chose the path of the frontend (specifically mobile) because it fascinates me when a lot of people see on their screens something that you have built. A lot more than building a system that only the internal frontend devs are going to appreciate (or not).


> Backends are so much easier to build compared to frontends, and I mean decent and reliable ones.

That's actually true. What makes backends harder (or rather slower?) to build, though, is the longevity of the data involved. Backends that don't deal with data are indeed simpler than frontends, in my experience.

> The attitude of the backend people of looking down upon frontend people

That has nothing to do with the complexity. It comes from the fact that frontend is more approachable and also more hip. It attracts more "amateurs" or people who don't know what they are doing, for one reason or another. The result is that technology on the backend-side is more advanced and mature than on the frontend side - though that might be a bit subjective and not apply to specific areas.

You actually give your own example: "frameworks with millions of entry points", and a dozen new ones every year. :) However, I think frontend is in the process of catching up - e.g. Javascript is currently being replaced with Typescript, which is not looked down upon by backend people, at least not as much.


> Javascript is currently being replaced with Typescript, which is not looked down upon by backend people, at least not as much.

For some reason, unbeknownst to God and his prophets, very smart, intelligent folk want to use a BMX as a farming tool to plough and till the soil. Now, we aren't going to ask silly questions like: why use a BMX when you have a whole barn full of tractors?

No.

We are going to ask how to get maximum output from the pedal mechanism. How to replace the human with a robot so we plough faster, and how best to install solar panels for a daytime power source, as well as highly specialized lithium-ion cells made specifically for this purpose, so we can cover more ground.

A farmer who has been farming his whole life really believed in the tractors, with billions of man-hours on commercial farms: trusty, easy to fix and operate, with a plethora of mechanics to fix and improve them.

But that just gives the farmer a vague existential crisis; what really gets him in the dumps is how they get the goods to and from the farm. See, he has big lumbering trailers and a few modern rigs. He's been using them forever; they are insured and also have possibly the highest reliability of any transport. The new farmers merely smirk at the diesel autos. They have something better: smaller, faster, and many of it. Quadbikes hooked to trailers. Hundreds and thousands of them. Apparently a central computer, bless it, controls and orchestrates the many quadbikes. The old farmer feels lost and helpless every time he goes out to harvest. His trusty harvester looks impossibly old and slow, steady as she goes. No matter that she can haul a few hundred tons a week; she is nowhere near as sophisticated as the automated quadbikes, zipping in and out with their 80 kg loads at breakneck speed. It's like watching a symphony.

He wonders what they actually grow there. Seeing such advanced (to him, at least) alien technology, these new farmers must be planting something more delicate than rice, higher-yielding than wheat, and sturdier than simple corn. Surely, he reasons, his life is all but over.

But is it?

The FOMO chronicles, Afseyuk, 2020


Something I've noticed is that when you're building a backend, your program only ever interfaces with other programs. You can establish a reasonably predictable protocol. But when you build a front-end, you're interfacing with the human brain and all the messiness that that entails. It's an incredibly difficult thing to do well.


This is exactly why frontends are orders of magnitude more difficult to build, if equal quality standards are applied to both back and front ends.

A front end is a state machine that relies on a bunch of enormous framework layers. Give me an app, mobile or web, and I'll find a glitch or two in the first few minutes of using it.

Backends on the other hand, like you said, only ever talk to other programs. And that means building a 100% correct and bug free backend system is not such a fantastical goal.


>Give me an app, mobile or web, and I'll find a glitch or two in the first few minutes of using it

I find that a very strong claim, is it rhetorical or meant literally? If the latter, what can you find in (say) whatsapp* ?

(*: aside from messages delaying or appearing out-of-write-order, which are probably backend bugs inevitably arising from its distributed nature)


I deleted WhatsApp a while ago, so I can't say right now. It is a pretty mature app that hasn't added big features in years, I think, so I wouldn't expect a lot of bugs in it; but take a look at its version history on the App Store, and there are plenty of purely bug-fix releases this year alone.


I think the biggest difficulty in front ends is that they are stateful. Backends are usually stateless and pass through data from a data store or a cache.


It's the opposite. Frontends are stateful (backends often are too), but frontends don't deal with long-lived data. Every time the app/site is reloaded/restarted, it starts with a fresh state. The backend, however, has often accumulated a lot of data tech debt over time.


The difficulties of backends usually come from one of these areas that I don't think you're giving proper credit to:

- scale - a frontend shows one user's view at once, a backend may need to be updating state for millions of users without slowing anyone's experience down. But these sorts of things are somewhat different than just "backend code", it's often closely tied to infra and data designs, which doesn't necessarily look like "backend coding" but is critical to make the rest work.

- business logic complexity - often the backend teams are tasked with working with frontend to figure out a simple, user-friendly API surface for the app and also to figure out how to make that work with the various not-written-down, not-even-fully-enumerated business logic permutations and backend-data permutations that could be going on. If they do their job right, the frontend can show different screens with values from the API ready to plug in for a particular scenario without having to worry about the combinatorial rules behind the scenes that led us to want to show the user THIS screen instead of any other one. The difficulty there isn't a mathematical or "hard" one, it's that the product team probably has no idea just how gnarly the set of business rules they sketched out actually gets. A backend has to support everything the frontend client can do today, but often ALSO support a bunch of things the public frontend apps don't yet do (whether this is for admin tasks, or for future readiness, or what).

I'm not sure I see the significance of the number of symbols in UIKit - you usually don't need all of them, I assume, and I know that there are some common patterns that get re-used a lot. Is it so different from in backend when there are dozens of databases you could choose, and a bunch of frameworks for each of a bunch of different languages you could choose, but for most problems it's not going to make a huge difference?

One thing that often hides the difficulty on backend is that the frontend team might be interacting with just one or two backend teams that then interact with a different set of teams behind them, etc. But the true "backend vs frontend" question has to include ALL of those teams, even the ones you don't directly interact with, since they're all in the critical path.

"Hard frontend problems are harder than easy backend problems" - ok, sure. But that's not a very interesting comparison.


Someone else has already mentioned in this thread that backends only ever interact with other programs, not humans. This fact alone makes it easier to design, implement and scale. And like I said elsewhere it is in fact so much easier that a backend can be built to be 100% correct and bug free, while for any more or less interesting and useful frontend it is practically impossible to achieve.


That's nonsensical. User input is passed to backends all the time, and results in just as many difficulties there. From SQL injection and other "untrusted input" problems, to dealing with migrating five years worth of user data to a new system because the old backend can't keep up, to trying to keep every piece of data in a consistent state.

Look at CAP theorem: it literally says "you can't have a perfect system here, you have to choose." That's a backend problem with literally no "100% correct and bug free" answer!

(With regard to the original article here, your error is that you're assuming modern backend software is closer to "low level" than "high level" compared to frontend code. It really isn't. And in terms of running your code, it's often at a HIGHER level than an app running directly on a user's device - you're in containers on VMs on someone else's hardware in a cloud environment, say.)


I think the CAP theorem is a bit overstated. It’s not, “You can’t have a perfect system,” but “You can’t have a system which is both available and consistent during a network partition.” True, and interesting, and useful to know, but if “100% available” is in your requirements to begin with, you are doomed to fail and the CAP theorem is just more nails in the coffin.

To me, the “you can’t have a perfect system” is really just, “You can’t have 100% availability, and you can’t have 100% durability either.” The hard part is then convincing the users of your system that even though your system seems perfect, they should be prepared for it to fail.

IMO the backend API solution is somewhat easy. 503 Service Unavailable. Then the front-end needs to somehow present a meaningful error message to the user, with a plan for how the user can accomplish their desired action or at least save their state and try continuing later… which can get insanely complicated…


If your backend is frequently throwing 500 errors and "convince the users of your system that this is OK" flies with your management, sure, backend is easy!

But that's often not the requirement... 100% available isn't the requirement per se either, but "as available as possible because outages cost us $XXXX/minute" is. And pushing that number up is HARD.

And remember the context of this thread: we have a claim here that backend IS easy and that it CAN be "100% correct and bug free." Saying this allows for random-ass 500 errors whenever stuff is broken is such a huge copout as to make that original claim meaningless.


> If your backend is frequently throwing 500 errors and "convince the users of your system that this is OK" flies with your management, sure, backend is easy!

Strawman argument. I think you’re responding to something I didn’t write. The argument is kind of absurd.

> …but "as available as possible because outages cost us $XXXX/minute" is.

“As available as possible” is not a reasonable requirement. If this is your requirement, you’re in deep shit, but for different reasons. The problem is that you can generally spend more money to get more uptime. At some point, you have to say that the cost of more uptime is too dear.

You can do experiments, testing, and simulation to get a more quantitative handle on the trade-off between uptime and resources spent. If you care that much, that is. But “as available as possible” is a bad requirement because there’s no way to tell if it’s satisfied.

What ends up happening is that people unsatisfied with uptime can blame you for not having enough uptime, but they aren’t telling you how much is enough. The purpose of having a requirement is to record an agreement between different teams about “how much uptime is enough uptime”. It can be the wrong number, and it can be revisited, but if it’s not a number (with a way to measure it), you won’t be able to calculate other numbers like “how many database replicas do I need?”

> And remember the context of this thread…

I’m responding to the CAP theorem bit. You said that the CAP theorem means, “you can't have a perfect system here, you have to choose”, and I think that this is misstating the CAP theorem.

> Saying this allows for random-ass 500 errors whenever stuff is broken is such a huge copout as to make that original claim meaningless.

503 errors aren’t random. They happen when the backend is unavailable. It’s not a random “stuff is broken” error—that’s 500.

This is not a cop-out. Services will sometimes be unavailable. The 503 status is the correct way to signal this to the front-end. The fact that the back-end returns a 5xx series error code does not mean that there is a bug in the back-end.

Likewise, if the service is important enough, your front-end should be able to respond to a 503 error with some kind of reasonable workflow for the user.


Backends also need to design/build for their operators.


A lot of the time, the part of the backend that renders pages and handles user input can be assumed to be part of the frontend; I'm not talking about those. I meant backend as an API.

I honestly think you've never built a decent sized, non-trivial mobile app for example, and you probably have no idea how tedious and difficult it is to produce something that will be pleasant for the users to deal with.


I mean, if you say "parts of the backend are actually the frontend"[0] and "parts of the backend are actually the infra" and say "backend" is just a CRUD API that talks to a database: sure, that can be easy. But I've rarely seen backend jobs that don't require touching the 90% of the iceberg that's underwater...

So it's a stupid definition. I haven't built a large mobile app, but I'm not claiming frontend dev is easy.

You're claiming backend dev is easy but your argument so far reduces to "it's easy because the hard parts actually aren't part of it" which is tautological and pointless. You're not even just claiming it's easy, you're claiming it can be built 100% correct and bug free.

What's the most complicated backend system you've built? What database, what user scale, what traffic scale, what technologies, what servers, etc?

[0] the part of the backend that renders pages, if one exists, and the part that validates user input should be VERY VERY different components, anyway...


Hundreds of thousands of daily active users; I built both the backend and the mobile app. I think I'm familiar with common problems on both sides. Trust me, dealing with UI platforms is orders of magnitude more of a pain in the arse if you set the quality standards to the highest. Even the sizes of the two codebases were roughly 1 to 20-25.


A big difference still remains: a frontend (and that includes SPAs, mobile apps and desktop applications) is essentially restarted all the time.

If I screw something up in the frontend, then in most cases I can fix it, people use the new version / reload the page, and that's it. In the backend, if I screw up, in most cases I now have a data problem and an ongoing problem that might only be fixable through an operational update. I might even have lost/overwritten data that is not recoverable, or recoverable only through backups.

Sure, this can happen on the frontend too (e.g. sending a wrong update command to the backend API, or when storing some data/configuration directly in the browser), but from my experience it is much, much more likely to happen in the backend, and when it does, it is more severe.

This doesn't make things more difficult on the technical side per se, but it slows things down a lot. I can't count how often I've wished I could just truncate all user data and start with a clean slate, instead of with a long, messed-up user data history full of quirks and fixes/workarounds in the data that I have to stay compatible with and not forget.


A backend dev may argue that the frontend is complicated _unnecessarily_. You don't need your bloated, complicated frameworks. In most cases, the website would be better for the end user if you stopped trying to over-design the frontend.

Frontend is hard so we use lots of frameworks which makes frontend hard so we use some more frameworks.....


> The attitude of the backend people of looking down upon frontend people is really stupid and unsubstantiated.

Agreed. Personally, when I've heard this, it's frequently due to the "backend" devs really struggling with the complexity of user interface needs, especially UI state. This is the root of a lot of the "JavaScript is such a trainwreck" type comments too.


One problem may be that building specific units of product functionality is considered junior-level work. Doing so quickly and with high quality can get you a nice $20,000 bonus, but not the $200,000 RSU bump associated with promotion to the next level.

A higher level engineer is meant to "scale her impact" by owning the architecture, the platform, the playground in which other engineers live. In backend world there are opportunities to do this in nearly any business domain or sufficiently complex problem. But a company only needs so many UI frameworks. As such we see very good, long tenured mobile and frontend engineers simply cranking out feature after feature. There's no career progression that way.


>The attitude of the backend people of looking down upon frontend people is really stupid and unsubstantiated.

I think the high horse is because frontend dev is hard. The backend has more methodical tooling. Javascript is not considered a clean language even if it has improved.

It would be nice if you could use clean modern tools but the web is forced to support legacy browsers so it is what it is.

I personally like the full stack and don't look down in either direction, but I understand the sentiment.


I don't think backend people deny frontend is hard. The problem is frontend is hard for all the boring reasons.


The most annoying thing about frontend is the shifting sands. Not only is it complex but APIs, browsers / windowing systems, OSes, and UI/UX fads are constantly shifting. You often have to rewrite the damn thing every few years or it looks "old," and when people see apps that look "old" they assume they're not maintained or are of low quality.


>If you don't believe me, UIKit alone exposes I think about half a million public symbols.

That's interesting, do you have a source for this?

For reference the Qt5 framework only has about 25k publicly documented functions and the shared libs expose ~100k symbols.


I think that passes the smell test. They're at the same order of magnitude and UIKit includes a ton of optional functionality that is situationally used where Qt would likely pull in a 3rd party lib.


Honestly I don't remember but someone counted all the public symbols which included constants too.


> Backends are so much easier to build compared to frontends, and I mean decent and reliable ones.

Backend:

- running a migration on a table of 100 million rows... in MySQL 5.6. Well, things can get complicated.

- building the infrastructure and monitoring of a k8s cluster

- debugging concurrency bugs

I think these topics are not really easier than their frontend counterparts. I do believe some aspects of frontend development are harder than some of backend development (and the other way around). I do not think, in general, that "backends are so much easier to build compared to frontends" though.
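On the first bullet: before MySQL had solid online DDL, the usual workaround (and what tools like pt-online-schema-change automate) was backfilling in small primary-key ranges, so no single statement locks the whole table. A rough sketch of just the batching logic (my own illustration, with a hypothetical schema):

```python
def batch_ranges(min_id: int, max_id: int, batch_size: int):
    """Yield (start, end) primary-key ranges so a backfill UPDATE runs as
    many short transactions instead of one table-wide lock."""
    start = min_id
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        yield (start, end)
        start = end + 1

# Each range would become something like:
#   UPDATE t SET new_col = old_col WHERE id BETWEEN %s AND %s;
# with a short pause between batches to let replication catch up.
print(list(batch_ranges(1, 10, 4)))  # [(1, 4), (5, 8), (9, 10)]
```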


> Backends are so much easier to build compared to frontends, and I mean decent and reliable ones.

Evidence would suggest otherwise. I can build a decent and reliable frontend with a no code solution. That's not to say frontend is "easy", just that trying to say which is harder is ridiculous. They're different.

I personally don't like working on frontends because it feels like getting on a hamster wheel that spins faster than in any other domain. ymmv though.


A backend can just say ‘no’ if it gets a number in a format it doesn’t like.

If a user enters a number incorrectly on a frontend, the frontend can't just say no. It might have to try its best to parse the number, and if it can't, the error has to be presented in a user-friendly way. A frontend can't just throw an exception and say Bad Request.
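As an illustration of "try its best" (my own sketch, with made-up format rules), lenient parsing might look like:

```python
import re

def lenient_number(text: str):
    """Best-effort parse of user-entered numbers: trims whitespace, accepts
    thousands separators, and both ',' and '.' as a decimal mark.
    Returns None instead of raising, so the UI can show a friendly message."""
    s = text.strip().replace(" ", "")
    if re.fullmatch(r"-?\d{1,3}(,\d{3})+(\.\d+)?", s):  # e.g. 1,234,567.89
        s = s.replace(",", "")
    elif re.fullmatch(r"-?\d+,\d+", s):                 # e.g. 3,14 (decimal comma)
        s = s.replace(",", ".")
    try:
        return float(s)
    except ValueError:
        return None

print(lenient_number(" 1,234.5 "))  # 1234.5
print(lenient_number("3,14"))       # 3.14
print(lenient_number("twelve"))     # None
```

The backend equivalent is a one-liner that rejects anything not already canonical.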


> If a user enters a number incorrectly on a frontend, a frontend can’t just say no.

Followed by the distant sound of enterprise, banking, and government frontend devs laughing.


I disagree. Have you ever worked in finance? Inconsistent or unexpected number/date formats have to be dealt with. And to add to it, the formats can be binary, not just text that you can look at and sort of "figure out". Often these files are coming from other large institutions, and getting them to fix their data takes days, if it happens at all. In the meantime there is daily processing that has to get done in order to trade at the next day's opening.

A system can't just reject data it doesn't like.


Or you just send the input to the backend and have the backend tell the frontend to tell the user "backend said no"


> I can build a decent and reliable frontend with a no code solution

Things that can be built that way are not interesting. Of course you can just assemble online stores without writing a single line of code, but that's not exactly programming. We are talking about frontends that have some sort of novelty or at least non-standard flows in them.


The parent made no such distinction. And your comment applies to just as much to backends as frontends. If a backend is "easy" to make reliable then it's probably not very interesting.


Seems like you don't know much about building frontends.


Sounds like you don't know much about building backends.


It has been easy over the years:

Low-level is easy (2008) - https://news.ycombinator.com/item?id=20568736 - July 2019 (26 comments)

Low-level is easy - https://news.ycombinator.com/item?id=7538150 - April 2014 (56 comments)


Anyone who becomes sufficiently experienced in a domain will inevitably learn and take advantage of their underlying platform's characteristics.

A web developer who wants an app to load and render quickly will take advantage of the browser's characteristics. For example, loading compressed assets in the right order to get a short time-to-first-paint, updating styles without reflowing the page, using requestAnimationFrame to render animations at a consistent frame rate and play nicely with the event loop, etc.

Likewise, a systems programmer ends up learning the quirks of their OS and hardware. For example, loop tiling to take advantage of cache locality, reducing data dependencies to reduce pipeline hazards, using splice() to copy data on Linux to avoid context switching, etc.
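Loop tiling, for instance, is just reordering iteration so a block of data gets reused while it's still hot in cache. Here's the shape of it in Python (my own sketch; the actual speedup only materializes in a language like C, where memory layout is under your control):

```python
def transpose_tiled(src, n, tile=4):
    """Transpose an n x n matrix (flat list, row-major) block by block.
    Visiting tile-sized blocks keeps both the read side and the write side
    of the transpose within a cache-sized working set."""
    dst = [0] * (n * n)
    for ib in range(0, n, tile):
        for jb in range(0, n, tile):
            for i in range(ib, min(ib + tile, n)):
                for j in range(jb, min(jb + tile, n)):
                    dst[j * n + i] = src[i * n + j]
    return dst

n = 6
m = list(range(n * n))
t = transpose_tiled(m, n)
assert all(t[j * n + i] == m[i * n + j] for i in range(n) for j in range(n))
```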

It's all just programming. Even if the distance from hardware is different, the concepts are consistent. Historically, hardware has been under-documented, which I think led to the community glorifying low-level programming. But under-documented and buggy platforms are essentially universal nowadays, so I think the low-level vs high-level distinction is not very useful.


I recently switched from being a more general "software" dev to an embedded systems dev. Despite the fact that I'm using mostly C, I find everything much easier in general.

There are so many fewer layers of abstraction and "magic" to dig through that I can always figure out what's going on easily enough. I think this article hits the nail on the head.


Staying as far from the business as possible, while still keeping a thin line of contact with it, is IMHO the best way to live a programmer's life. Even when programming low-level systems such as operating systems or compilers, one still keeps a thin line of thought about real-world business through communicating with your users.


I wonder how much of this is due to the fact that programming education today takes a top-down approach, where one starts with a high-level language with little knowledge of the underlying system or the layers of abstraction on top of which it runs. Your upper-year courses are where you begin learning more about operating systems, computer architecture, and the like.

Naturally, those programmers become focused on the high-level content because that's what they were brought up with and are comfortable with.

It would be interesting to see courses taught from a bottom-up approach, maybe not at the gate/transistor level but rather at the assembly level. Teach the basic instructions, move on to C, then move on to high-level languages - I actually learned programming this way and found that it helped a lot in understanding why and how things happen the way they do.


It's been plugged a bunch of times on HN, but "The Elements of Computing Systems" really helped me with this as a self-taught dev.


I used to be naive and think frontend was just pushing pixels around and messing with HTML. Now I like to think I know better.

I think people generally perceive low-level/backend work as more objective, scientific, and rigorous.


Currently learning embedded development. There's a kind of precision needed in this space that wasn't important in higher-level, mostly frontend work. I need to know that the loop executed exactly twice since the last update, or whatever. But there's also a ton of stuff I don't need to worry about: input sanitization, animation curves, view layering, network request invalidation, stuff like that. I think the premise is wrong. There is no easy or hard domain. It's all about the specific thing you're doing within that domain.


I guess I'm just an idiot ¯\_(ツ)_/¯

I guess let's define what "low-level" is, because I see a lot of abstractions in low-level stuff. A couple of years ago I had a sudden interest in the x86 platform. That entire architecture just seems like a piling of new abstractions on top of legacy stuff. And the hardware platform it typically runs on seems impossible to understand without being an insider. I remember trying to fully understand the entire UEFI boot process, and all the stuff below it and other modern components. The docs were horrendous, at times seemingly non-existent for a lot of stuff, and it was just too much information to try to take in.

I don't think I'll ever come around on the tooling. C is just a bit too barebones for me, and I'm not fond of the "portable assembler" aspect of a lot of behavior being offloaded to the platform, which I now have to know in depth. I can read it, and write a little bit, but I don't think I could ever become good at it or enjoy it. C++ seems immeasurably complicated, and I recall reading the phrase "I've been learning C++ for 10 years" the other day. There are some languages I don't mind as much. I like Ada, but no one uses it anymore. I'm working on what I suppose you could consider some systems software at the moment, and have resorted to using higher-level languages that provide features or abstractions giving me exactly what I need (notably Erlang bitstrings), only dropping to C when I need it.

I've tried other things too: driver development on Windows, Linux, and the BSDs (when the docs are decent, this is the thing I've tried that makes the most sense), and FPGA stuff (for some reason, I can never wrap my head around digital logic; it makes sense at a very abstract level, but combining components into useful designs is just so foreign to me).


The last few months I have been developing a web app. The shit you have to go through to make basic stuff work is insane. I would get depressed if this were my full-time job. How the browser can be one of the most important technologies and still suck so hard is truly mind-boggling.


Have you tried Go? I do my web apps in Go, and all I need on my computer is the `go` binary installed. Nothing else. I use plain JS, CSS and HTML, which seems enough when combined with the Go templating system.


It's not the backend that's the problem. That's trivial. But browsers are just so annoying. HTML + CSS looks slightly different in every browser. Web APIs that don't work in all browsers. Basic stuff like file saving not being available. And don't get me started on CSS. I have never met anyone who truly understands CSS. It's all just throwing shit at the wall until it sticks.


> The shit you have to go through to make basic stuff work is insane.

Isn't that always the case when you are jumping into a new area?


It is almost as if writing the low level code required to make a browser run well is hard...


How much of that hardness is necessary, though? Backward compatibility is quite costly.


Shh, don’t tell people this - we’re trying to keep the other programmers out. ;)

In all seriousness embedded feels like the last safe place for a programmer like me, who grew up in the [redacted ancient decade] and has watched with befuddlement the tumescent froth of abstractions in other programming disciplines. I just can’t do web programming, for example. I did for many years but it broke my spirit and made me really dislike programming.

The “aha” moment was when I realized I like programming computers. Not browsers, not VMs, but actual computers. When I think about that old quote about computer science and telescopes, well... it turns out I really do like telescopes.


Any tips for starting in embedded? I don't necessarily want to switch paths, just tinker with something at home. But something realistic that a modern embedded engineer would work on, not say making a light blink with an Arduino.


My first stab at prototyping was with an STM32 discovery kit, a couple of breadboards, and a bag of random parts. The best advice I can give you, though, is to start with an idea. Do you want to do sensing, audio, LED art, a tiny game console, a robot? That'll determine the kinds of hardware you should get and will direct your learning. I can also recommend Making Embedded Systems by Elecia White.


Every programming job is hard enough that the average programmer doing it will struggle, since it is those you will be compared against. So there is no hard or easy, you just struggle with different things. If you have a comparative advantage doing low level tasks then that will be easier for you, if you have a comparative advantage doing high level tasks then those will be easier.

Of course, if a field is popular enough relative to demand that employers can be more selective, then those jobs will be harder, since you are compared against better people. Game development is one such area.


I took a compiler construction course this year and successfully passed it. We implemented an imperative/functional hybrid, with plenty of the implied (the prof had warned us) swearing/cursing/regretting/all-nighters along the way. The toolchain comprised flex, bison, and LLVM, all wrapped with C++ and running on x86_64.

I still don't feel any confidence around user-space code. Lots of stuff was abstracted away--especially during the code generation part. Before taking this course, I had imagined that every component of my compiler would be hand-crafted by yours truly. Note that I do not expect an undergrad course to demand such levels of skill.

The thing I want to say, I guess, is this gut feeling that there are projects more appropriate than a compiler for drawing your first userland blood. Like a memory allocator, for instance; in my opinion, systems programming carries far-reaching, all-rounder knowledge on its shoulders, with knowledge meaning a reliable representation of what takes place under the hood of a modern computer, at a comfy level of abstraction (programs, threads, data, the memory hierarchy, user, kernel, etc).
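The memory-allocator idea scales down nicely too: even a toy bump allocator (a simulation sketch of my own, not production code) teaches alignment, arena lifetimes, and out-of-memory handling:

```python
class BumpAllocator:
    """Toy bump allocator over a fixed arena: alloc just advances a pointer,
    and only a wholesale reset frees memory. Real allocators (free lists,
    size classes) grow out of ideas like this."""
    def __init__(self, size: int, align: int = 8):
        self.size, self.align, self.top = size, align, 0

    def alloc(self, nbytes: int):
        # Round the current top up to the next alignment boundary.
        start = (self.top + self.align - 1) // self.align * self.align
        if start + nbytes > self.size:
            return None  # out of memory
        self.top = start + nbytes
        return start  # offset into the arena

    def reset(self):
        self.top = 0

a = BumpAllocator(64)
print(a.alloc(5))  # 0
print(a.alloc(5))  # 8 (aligned up from offset 5)
```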

I am afraid, however, that I'm just too amateur to actually judge, and that I'm projecting a considerable amount of CS-revolving insecurities on the above statements. Any help with clarifying that is deeply appreciated.


>flex, bison, and LLVM

Don't feel bad for wiring up ready-made components. The common wisdom (which I imbibed by reading lots of tutorials and half-reading random interesting sections of compiler books and telling myself this is learning) in compiler land is that the things that happen at the very beginning and the very end are not the core of the problem, and abstracting them away is safe and even preferable for a language designer.

The "parsers are uninteresting" meme is of course very widespread (often unfairly; parsers and parsing algorithms can be a gorgeous rabbit hole), and I feel that a corresponding "native code generation is uninteresting" meme is beginning to take hold. Despite much rote repetition, they are mostly right. Parsers and code generation have two things in common:

1. They are very well formalized, with sophisticated algorithms that can take a declarative notation and generate what it describes (grammars for parsers, 3-address codes and other virtual assembly languages for code generators).

2. They are relatively stable; there are only a few major families of syntax styles, and only a few major mainstream architectures.

So you have a complex task, but once done it's relatively stable and static. Its algorithms and architectures are very complex and error-prone, but they can be papered over with well-defined specifications that other tools consume to do the heavy lifting for you. Sounds like the perfect storm for automation.
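The "declarative notation to code" step is easiest to see with a grammar: each rule becomes one function in a recursive-descent parser, which is roughly the mapping that generators like bison mechanize. A toy example of my own, for a minimal expression grammar:

```python
import re

# Grammar:  expr := term (('+'|'-') term)*
#           term := NUMBER | '(' expr ')'
# One function per rule -- the hand-written version of what a
# parser generator derives from the grammar automatically.

def tokenize(s):
    return re.findall(r"\d+|[()+\-]", s)

def parse_expr(toks, i=0):
    value, i = parse_term(toks, i)
    while i < len(toks) and toks[i] in "+-":
        op, (rhs, i) = toks[i], parse_term(toks, i + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, i

def parse_term(toks, i):
    if toks[i] == "(":
        value, i = parse_expr(toks, i + 1)
        return value, i + 1  # skip the closing ')'
    return int(toks[i]), i + 1

print(parse_expr(tokenize("1 + (2 - 3) + 10"))[0])  # 10
```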

In contrast, lots of problems in language design are quickly shifting sands of ergonomics and other problems where the tradeoff landscape is so large and varied that there is no obvious "one way to do it well": packaging and importing libraries, cheap editor and IDE integration, how to make things human-friendly as well as machine-friendly, etc.

This is not to say that parsers or code generators are "solved", only that they sit just slightly above the nastiest chess or AlphaGo game: sometimes manageable, sometimes ferociously difficult, at other times downright impossible. Always interesting and worth exploring, but never ambiguous or murky.

Tooling, semantics and other design/social problems are more like how to best raise your kids or how to make a just and lawful society, sometimes people can't even agree on what seems to be the problem.

So, naturally, the constant, formal parts of the compiler got formalized and hidden behind declarative specifications, and further exploration and implementation was relegated to academia and R&D departments. The social aspects and murky design problems are what drive people to design new languages (for purposes other than learning).


I feel your pain. I recommend reading the absolutely wonderful book "Crafting Interpreters" by Bob Nystrom. For me, it demystified a lot of what you mentioned.



