New Blog
I relocated my blog to Hugo due to easier maintenance and more control over content and layout. You can find it here.
All articles from this blog have been preserved, although I won’t list some that I found lacking in quality.

Combining a deep-depthwise CNN architecture with variable quantization in BitNetMCU achieves state-of-the-art MNIST accuracy on a low-end 32-bit microcontroller with 4 kB RAM and 16 kB flash.
Read the article at my new blog location.
This’ll be the last Recently in 2025. It’s been a decent year for me, a pretty rough year for the rest of the world. I hope, for everyone, that 2026 sees the reversal of some of the current trends.
This video from Daniel Yang, who makes spectacular bikes of his own, covers a lot of the economics of being a bike-builder, which are all pretty rough. I felt a lot of resonance with Whit: when I ran a business I always felt like it was tough to be commercial about it, and had to fight my own instincts to over-engineer parts of it. It’s also heartbreaking to think about how many jobs are so straightforwardly good for the maker and good for the buyer but economically unviable because of the world we live in. I feel like the world would be a lot different if the cost of living was lower.
Yes, the fan video! It’s a solid 48 minutes of learning how fans work. I finally watched it. Man, if every company did their advertising this way it would be so fun. I learned a lot watching this.
I have trouble finding videos about how things work that are actually about how things work. Titles like “how it’s made” or “how it works” or “how we did it” perform well in A/B tests and SEO so they get used for media that actually doesn’t explain how it’s made and how it works, greatly frustrating people like me. But the fan video delivers.
But then, you realize that the goal post has shifted. As the tech industry has become dramatically more navigable, YC became much less focused on making the world understandable, revolving, instead, around feeding consensus. “Give the ecosystem what they want.”
I have extremely mixed feelings about Build What’s Fundable, this article from Kyle Harrison. Some of it, I think, is brave truth-telling in an industry that usually doesn’t do public infighting - Harrison is a General Partner at a big VC firm, and he’s critiquing a lot of firms directly on matters both financial and ethical.
But on the other hand, there’s this section about “Breaking the Normative Chains”:
When you look at successful contrarian examples, many of them have been built by existing billionaires (Tesla, SpaceX, Palantir, Anduril). The lesson from that, I think, isn’t “be a billionaire first then you can have independent thoughts.” It is, instead, to reflect on what other characteristics often lead to those outcomes. And, in my opinion, the other commonality that a lot of those companies have is that they’re led by ideological purists. People that believe in a mission.
And then, in the next section he pulls up an example of a portfolio company that encapsulates the idea of true believers, and he names Base, which has a job ad saying “Don’t tell your grandkids all you did was B2B SaaS.”
Now, Base’s mission is cool: they’re doing power storage at scale. I like the website. But I have to vent here that the founder is Zach Dell. Michael Dell’s son. Of Dell Computer, and a 151 billion dollar fortune.
I just think that if we’re going to talk about how the lesson isn’t that you should be a billionaire first before having independent thoughts and building a big tech company, it should be easy to find someone who is not the son of the 10th wealthiest person in the world to prove that point. I have nothing against Zach in particular: he is probably a talented person. But in the random distribution of talented, hardworking people, very few of them are going to be the son of the 10th wealthiest person in the world.
Like so many other bits of Times coverage, the whole of the piece is structured as an orchestrated encounter. Some people say this; however, others say this. It’s so offhand you can think you’re gazing through a pane of glass. Only when you stand a little closer, or when circumstances make you a little less blinkered, do you notice the fact which then becomes blinding and finally crazymaking, which is just that there is zero, less than zero, stress put on the relation between those two “sides,” or their histories, or their sponsors, or their relative evidentiary authority, or any of it.
I love this article on maybe don’t talk to the New York Times about Zohran Mamdani. It describes the way in which the paper launders its biases, which overlaps with one of my favorite rules from Wikipedia editing about weasel words.
I don’t want you to hate this guy. Yes, he actively promotes poisonous rhetoric – ignore that for now. This is about you. Reflect on all your setbacks, your unmet potential, and the raw unfairness of it all. It sucks, and you mustn’t let that bitterness engulf you. You can forgive history itself; you can practice gratitude towards an unjust world. You need no credentials, nor awards, nor secrets, nor skills to do so. You are allowed to like yourself.
Taylor Troesh on IQ is exactly what I needed that day.
The React team knows this makes React complicated. But the bet is clear: React falls on the sword of complexity so developers don’t have to. That’s admirable, but it asks developers to trust React’s invisible machinery more than ever.
React and Remix Choose Different Futures is perfect tech writing: it unpacks the story and philosophy behind a technical decision without cramming it into a right-versus-wrong framework.
When you consider quitting, try to find a different scoreboard. Score yourself on something else: on how many times you dust yourself off and get up, or how much incremental progress you make. Almost always, in your business or life, there are things you can make daily progress on that can make you feel like you’re still winning. Start compounding.
“Why a language? Because I believe that the core of computing is not based on operating system or processor technologies but on language capability. Language is both a tool of thought and a means of communication. Just as our minds are shaped by human language, so are operating systems shaped by programming languages. We implement what we can express. If it cannot be expressed, it will not be implemented.” – Carl Sassenrath
Alexis Sellier, whose work and aesthetic I’ve admired since the early days of Node.js, is working on a new operating system. A real new operating system, like playb.it or SerenityOS (bad politics warning). I’m totally into it: we need more from-scratch efforts like this!
Yes, the funds available for any good cause are scarce, but that’s not because of some natural law, some implacable truth about human society. It’s because oligarchic power has waged war on benign state spending, leading to the destruction of USAID and drastic cuts to the aid budgets of other countries, including the UK. Austerity is a political choice. The decision to impose it is driven by governments bowing to the wishes of the ultra-rich.
The Guardian on Bill Gates is a good read. I’ve had The Bill Gates Problem on my reading list for a long time. Maybe it’s next after I finish The Fort Bragg Cartel.
Contrast this with the rhetorical shock and awe campaign that has been waged by technology companies for the last fifteen years championing the notion of ephemerality.
Implicit, but unspoken, in this worldview is the idea of transience leading to an understanding of a world awash in ephemeral moments that, if not seized on and immediately capitalized to maximum effect, will be forever lost to the mists of time and people’s distracted lifestyles.
Another incredible article by Aaron Straup Cope about AI, the web, ephemerality, and culture. (via Perfect Sentences)
Also, no exact quote, but I’ve been subscribed to Roma’s Unpolished Posts and they have been pretty incredible: mostly technical articles about CSS, which have been ‘new to me’ almost every day, and the author is producing them once a day. Feels like a cheat code to absorb so much new information so quickly.
I didn’t add any major new albums to my collection this month. I did see Tortoise play a show, which was something I never expected to do. So in lieu of new albums, here’s a theme.
There are a bunch of songs in my library that use a meter change as a way to add or resolve tension. I’m not a big fan of key changes but I love a good rhythm or production shift.
First off: Kissing the Beehive. Yes, it’s nearly 11 minutes long. Magically feeling “off kilter” and “in the pocket” at the same time. I’m no drummer but I think the first part is something like three measures of 4/4 and one of 6/4. But then at 3:26, brand new connected song, and by the time we get to 7 minutes in, we’re in beautiful breezy easy 4/4!
An Andrew Bird classic: about a minute of smooth 4/4, and then over to 7/4 in the second half or so.
I adore Akron/Family’s Running, Returning. Starts in classic 5/4, then transitions to 4/4, then 6/8. For me it all feels very cohesive. Notably, the band is not from Akron, Ohio: they formed in Williamsburg and came from other East Coast places. If you’re looking for the band from Akron, it’s The Black Keys.
Slow Mass’s Schemes might be my song of the year. Everything they write just sounds so cool. The switchup happens around 2:40 when the vocals move to 6/4. Astounding.
Back in January, I made some predictions about 2025. Let’s see how they turned out!
1: The web becomes adversarial to AI
I am marking this one as an absolute win: more and more websites are using Anubis, which was released in March, to block LLM scrapers. Cloudflare is rolling out more LLM bot protections. At Val Town, I have started to turn on those protections to keep LLM bots from eating up all of our bandwidth and CPU. The LLM bots are being assholes and everyone hates them.
2: Copyright nihilism breeds a return to physical-only media
This was at most going to be a moderate win because physical-only media will be niche, but I think there are good signs that this is right. The Internet Phone Book, in which this site is featured, started publishing this year. Gen Z seems to be buying more vinyl and printing out more photos.
3: American tech companies will pull out of Europe because they want to do acquisitions
Middling at best: there are threats and there is speculation, but nothing major to report.
4: The tech industry’s ‘DEI backlash’ will run up against reality
Ugh, probably the opposite has happened. Andreessen Horowitz shut down their fund that focused on women, minorities, and people underrepresented in VC funding. We’ll know more about startups themselves when Carta releases their annual report, which looked pretty bad last year.
5: Local-first will have a breakthrough moment
Sadly, no. Lots and lots of promising projects, but the ecosystem really struggles to produce something production-ready that offers good tradeoffs. TanStack DB might be the next contender.
6: Local, small AI models will be a big deal
Not yet. Big honkin’ models are still grabbing most of the headlines. LLMs still really thrive at vague tasks with a wide range of acceptable outcomes, like chatbots, and are pretty middling at tasks that require strict, quantifiable outputs.
For my mini predictions:
It’s the end of 2025, which means that I’m closing in on three years at Val Town. I haven’t written much about the company or what it’s really been like. The real story of companies is usually told years after the dust has settled. Founders usually tell a heroic story of success while they’re building.
Reading startup news really warps your perspective, especially when you’re building a startup yourself. Everyone else is getting fabulously rich! It makes me less eager to write about anything.
But I’m incurably honest and like working with people who are too. Steve, the first founder of Val Town (I joined shortly after as cofounder/CTO), is a shining example of this. He is a master of telling the truth in situations where other people are afraid to. I’ve seen it defuse tension and clear paths. It’s a big part of ‘the culture’ of the company.
So here’s some of the story so far.
Here’s what the Val Town interface looked like fairly early on:

When I initially joined, we had a prototype and a bit of hype. The interface was heavily inspired by Twitter - every time that you ran code, it would save a new ‘val’ and add it to an infinite-scrolling list.
Steve and Dan had really noticed the utter exhaustion in the world of JavaScript: runaway complexity. A lot of frameworks and infrastructure were designed for huge enterprises and were really, really bad at scaling down. Just writing a little server that does one thing should be easy, but if you do it with AWS and modern frameworks, it can be a mess of connected services and boilerplate.
Val Town scaled down to 1 + 1. You could type 1 + 1 in the text field and get 2. That’s the way it should work.
It was a breath of fresh air. And a bunch of people who encountered it even in this prototype-phase state were inspired and engaged.

One of the pivotal moments of this stage was creating this graphic for our marketing site: the arrows graphic. It really just tied it all together: look how much power there was in this little val! And no boilerplate either. Where there would otherwise be a big ritual of making something public or connecting an email API, there’s just a toggle and a few lines of code.
I kind of call this early stage, for me, the era of delivering on existing expectations and promises. The core cool idea of the product was there, but it was extremely easy to break.
Security was one of the top priorities. We weren’t going to be a SOC2 certified bank-grade platform, but we also couldn’t stay where we were. Basically, it was trivially easy to hack: we were using the vm2 NPM module to run user code. I appreciate that vm2 exists, but it really, truly, is a trap. There are so many ways to get out of its sandbox and access other people’s code and data. We had a series of embarrassing security vulnerabilities.
For example: we supported web handlers so you could easily implement a little server endpoint, and the API for this was based on express, the Node.js server framework. You got a request object and a response object from express, and in this case they were literally our server’s own objects. Unfortunately, there’s a method response.download(path: string) which sends an arbitrary file from your server to the internet. You can see how this one ends: not ideal.
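To make that concrete, here’s a sketch of the shape of that bug - illustrative code, not our actual server:

import express from "express";

const app = express();

app.get("/run/:val", (req, res) => {
  // pretend this handler is untrusted code from a user's val
  const userHandler = (request, response) => {
    // express's built-in response.download() sends an arbitrary file
    // from the host machine - here, the machine running the platform
    response.download("/etc/passwd");
  };
  // req and res are the server's real objects, so there's no boundary
  userHandler(req, res);
});

app.listen(8080);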
So, we had to deliver on a basic level of security. Thankfully, in the way that it sometimes does, the road rose to meet us. The right technology appeared just in time: Deno. Deno’s sandboxing made it possible to run people’s code securely without having to build a mess of Kubernetes and Docker sandbox optimizations. It was secure, fast, and simple to implement: we haven’t identified a single security bug caused by Deno.
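The appeal is that the sandbox is deny-by-default and you grant capabilities explicitly. A sketch of the idea (the file name and the exact flags here are illustrative, not our actual setup):

// run an untrusted val in a Deno subprocess that can use the network
// but can't read files, write files, or see environment variables
const cmd = new Deno.Command("deno", {
  args: ["run", "--no-prompt", "--allow-net", "untrusted_val.ts"],
});
const { code } = await cmd.output();
console.log("val exited with code", code);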
That said, the context around JavaScript runtimes has been tough. Node.js is still dominant and Bun has attracted most of the attention as an alternative, with Deno in a distant third place, vibes-wise. The three are frustratingly incompatible - Bun keeps adding built-ins like an S3 client, which would have seemed unthinkable in the recent past. Node added an SQLite client in version 22. Contrary to what I hoped in 2022, JavaScript has gotten more splintered and inconsistent as an ecosystem.
Stability was the other problem. The application was going down constantly for a number of reasons, but most of all was the database, which was Supabase. I wrote about switching away from Supabase, which they responded to in a pretty classy way, and I think they’ve since improved. But Render has been a huge step up in maintainability and maturity for how we host Val Town.
Adding Max was a big advance in our devops-chops too: he was not only able but excited to work on the hard server capacity and performance problems. We quietly made a bunch of big improvements like allowing vals to stay alive after serving requests - before that, every run was a cold start.

Townie, the Val Town chatbot, in early 2024
Believe it or not, in early 2023 there were startups that didn’t say “AI” on the front page of their marketing websites. The last few years have been a dizzying shift in priorities and vibes, which I’ve had mixed feelings about and written about a lot.
At some point it became imperative to figure out what Val Town was supposed to do about all that. Writing code is undeniably one of the sweet spots of what LLMs can do, and over the last few years the fastest-growing, most-hyped startups have emerged from that ability.
This is where JP Posma comes in. He was Steve’s cofounder at a previous startup, Zaplib, and was our ‘summer intern’ - the quotes because he’s hilariously overqualified for that title. He injected some AI abilities into Val Town: RAG-powered search, and the first version of Townie, a chatbot that is able to write code.
Townie has been really interesting. Basically it lets you write vals (our word for apps) in plain English. This development happened around the same time as a lot of the ‘vibe-coding’ applications, like Bolt and Lovable. But Townie was attached to a platform that runs code and has community elements and a lot more. It’s an entry point to the rest of the product, while a lot of other vibe-coding tools were the core product that would eventually expand to include stuff like what Val Town provides.
Ethan Ding has written a few things about this: it’s maybe preferable to sell compute instead of being the frontend for LLM-vibe-coding. But that’s sort of a long-run prediction about where value accrues rather than an observation about what companies are getting hype and funding in the present.

There are way too many companies providing vibe-coding tools without having a moat or even a pathway to positive margins. But having made a vibe-coding tool, I completely see why: it makes charts look amazing. Townie was a huge growth driver for a while, and a lot of people were hearing about Townie first, and only later realizing that Val Town could run code, act as a lightweight GitHub alternative, and power a community.
Unlike a lot of AI startups, we didn’t burn a ton of money running Townie. We did have negative margins on it, but to the tune of a few thousand dollars a month during the most costly months.
Introducing a pro plan made it profitable pretty quickly and today Townie is pay-as-you-go, so it doesn’t really burn money at all. But on the flip side, we learned a lot about the users of vibe-coding tools. In particular, they use the tools a lot, and they really don’t want to pay for them. This kind of makes sense: vibe-coding actual completed apps without ever dropping down to write or read code is Zeno’s paradox: every prompt gets you halfway there, so you inch closer and closer but never really get to your destination.
So you end up chatting for eight hours, typically getting angrier and angrier, and using a lot of tokens. This would be great for business in theory, but in practice it doesn’t work for obvious reasons: people like to pay for results, not the process. Vibe-coding is a tough industry - it’s simultaneously one of the most expensive products to run, and one of the most flighty and cost-sensitive user-bases I’ve encountered.
So AI has been complicated. On one hand, it’s amazing for growth and obviously has spawned wildly successful startups. On the other, it can be a victim of its own expectations: every company seems to promise perfect applications generated from a single prompt and that just isn’t the reality. And that results in practically every tool falling short of those expectations and thus getting the rough end of user sentiment.
We’re about to launch MCP support, which will make it possible to use Val Town via existing LLM interfaces like Claude Code. It’s a lot better than previous efforts - more powerful and flexible, plus it requires us to reinvent less of the wheel. The churn in the ‘state of the art’ feels tremendous: first we had tool-calling, then MCPs, then tool-calling that writes code to call MCPs. It’s hard to tell if this is fast progress or just churn.
When is a company supposed to make money? It’s a question that I’ve thought about a lot. When I was running a bootstrapped startup, the answer was obviously as soon as possible, because I’d like to stop paying my rent from my bank account. Venture funding lets you put that off for a while, sometimes a very long while, and then when companies start making real revenue they at best achieve break-even. There are tax and finance reasons for all of this – I don’t make the rules!
Anyway, Val Town is far from break-even. But that’s the goal for 2026, and it’s optimistically possible.
One thing I’ve thought for a long time is that people building startups are building complicated machines. They carry out a bunch of functions, maybe they proofread your documents or produce widgets, or whatever, but the machine also has a button on it that says “make money.” And everything kind of relates to that button as you’re building it, but you don’t really press it.
The nightmare is if the rest of the machine works, you press the button, and it doesn’t do anything. You’ve built something useful but not valuable. This hearkens back to the last section about AI: you can get a lot of people using the platform, but if you ask them for money and they’re mostly teenagers or hobbyists, they’re not going to open their wallets. They might not even have wallets.
So we pressed the button. It kind of works.
But what I’ve learned is that making revenue is a lot like engineering: it requires a lot of attempts, testing, and hard work. It’s not something that just results from a good product. Here’s where I really saw Charmaine and Steve at work, on calls, making it happen.
The angle right now is to sell tools for ‘Go To Market’ - stuff like capturing user signups on your website, figuring out which users are from interesting companies or have interesting use-cases, and forwarding that to Slack, pushing it to dashboards, and generally making the sales pipeline work. It’s something Val Town can do really well: most other tools for this kind of task have some sort of limit on how complicated and custom they can get, and Val Town doesn’t.
Product-wise, the big thing about Val Town that has evolved is that it can do more stuff and it’s more normal. When we started out, a Val was a single JavaScript expression - this was part of what made Val Town scale down so beautifully and be so minimal, but it was painfully limited. Basically, people would type into the text box:
const x = 10;
function hi() {};
console.log(1);
And we couldn’t handle that at all: if you ran the Val, did it run that function? Export the x variable? It was magic but too confusing. The other tricky niche choice was that we had a custom import syntax like this:
@tmcw.helper(10);
In which @tmcw.helper was the name of another val and this would automatically import and use it. Extremely slick but really tricky to build off of because this was non-standard syntax, and it overlapped with the proposed syntax for decorators in JavaScript. Boy, I do not love decorators: they have been under development for basically a decade and haven’t landed, just hogging up this part of the unicode plane.
But regardless, this syntax wasn’t worth it. I have some experience with this problem and have landed squarely on the side of ‘normality is good.’
So, in October 2023, we ditched it, adopted standard ESM import syntax, and became normal. This was a big technical undertaking, in large part because we tried to keep all existing code running by migrating it. Thankfully JavaScript has a very rich ecosystem of tools that can parse & produce code and manipulate syntax trees, but it was still a big, dramatic shift.
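Here’s roughly what the change meant for a val (the import URL shape here is illustrative):

// before: custom syntax that automatically resolved another user's val
@tmcw.helper(10);

// after: a standard ESM import that every JavaScript tool understands
import { helper } from "https://esm.town/v/tmcw/helper";
helper(10);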
This is one of the core tensions of Val Town as well as practically every startup: where do you spend your user-facing innovation energy?
I’m a follower of the use boring technology movement when it comes to how products are built: Val Town intentionally uses some boring established parts like Postgres and React Router, but what about when it comes to the product itself? I’ve learned the hard way that most of what people call intuition is really familiarity: it’s good when an interface behaves like other interfaces. A product that has ten new concepts and a bunch of new UI paradigms is going to be hard to learn and probably will lose out to one that follows some familiar patterns.
Moving to standard JavaScript made Val Town more learnable for a lot of people while also removing some of its innovation. Now you can copy code into & out of Val Town without having to adjust it. LLMs can write code that targets Val Town without knowing everything about its quirks. It’s good to go with the flow when it comes to syntax.

Val Town has an office. I feel like COVID made everything remote by default and the lower-COVID environment that we now inhabit (it’s still not gone!) has led to a swing-back, but the company was founded in the latter era and has never been remote. So, we work from home roughly every other Friday.
This means that we basically try to hire people in New York. It hasn’t been too hard in the past. About 6% of America lives in the New York City metro area and the Northeast captures about 23% of venture funding, so there are lots of people who live here or want to.

Here’s something hard to publish: we’re currently at three people. It was five pretty recently. Charmaine got poached by Anthropic, where she’ll definitely kick ass, and Max is now at Cloudflare, where he’s writing C++, which will be even more intimidating than his chess ranking. The company’s really weirdly good at people leaving: we had parties and everyone exchanged hand-written cards. How people handle hard things says a lot.
But those three are pretty rad: Jackson was a personal hero of mine before we hired him (he still is). He’s one of the best designers I’ve worked with, and an incredibly good engineer to boot. He’s worked at a bunch of startups you’ve heard of, had a DJ career, gotten to the highest echelons of tech without acquiring an ego. He recently beat me to the top spot in our GitHub repo’s lines-changed statistic.
Steve has what it takes for this job: grit, optimism, curiosity. The job of founding a company and being a CEO is a different thing every few months - selling, hiring, managing, promoting. Val Town is a very developer-consumer oriented product and that kind of thing requires a ton of promotion. Steve has done so much, in podcasts, spreading the word in person, writing, talking to customers. He has really put everything into this. A lot of the voice and the attitude of the company flows down from the founder, and Steve is that.
In particular, we’re looking for a customer-facing technical promoter type - now called a “GTM” hire. Basically, someone who can write a bit of code but has the attitude of someone in sales. Someone who can see potential and handle rejection. Not necessarily the world’s best programmer, but someone who can probably code, and definitely someone who can write. Blogging and writing online is a huge green flag for this position.
And the other role that we really need is an “application engineer.” These terms keep shifting, so if “full-stack engineer” means more to you, sure, that too. Basically someone who can write code across boundaries. This is more or less what Jackson and I do - writing queries, frontend code, fixing servers, the whole deal. Yeah, it sounds like a lot, but this is how all small companies operate, and I’ve made a lot of decisions to make this possible: we’ve avoided complexity like the plague in Val Town’s stack, so it should all be learnable. I’ve written a bunch of documentation for everything and constantly tried to keep the codebase clean.
Sidenote, but even though I think that the codebase is kind of messy, I’ve heard from very good engineers (even the aforementioned JP Posma) that it’s one of the neatest and most rational codebases they’ve seen. Maybe it is, maybe it isn’t, see for yourself!
Tech hiring has been broken the whole time I’ve been in the industry, for reasons that would take a whole extra article to ponder. But one thing that makes it hard is vagueness, both on the part of applicants and companies. I get it - cast a wide net, don’t put people off. But I can say that:
The company’s pretty low drama. Our office is super nice. We work hard but not 996. We haven’t had dinner in the office. But we all do use PagerDuty so when the servers go down, we wake up and it sucks. Thankfully the servers go down less than they used to.
We all get paid the same: $175k. Lower than FAANG, but pretty livable for Brooklyn. Both of the jobs listed - Product Engineer and Growth Engineer - are set at 1% equity. $175k is kind of high-average for where we’re at, but 1% in my opinion is pretty damn good. Startups say that equity is “meaningful” at all kinds of numbers, but it’s definitely meaningful at that one. If Val Town really succeeds, you can get pretty rich off of that.
Of course, will it succeed? It’s something I think about all the time. I was born to ruminate. We have a lot going for us, and a real runway to make things happen. Some of the charts in our last investor update looked great. Some days felt amazing. Other days were a slog. But it’s a good team, with a real shot at making it.
Hello! Only a day late this time. October was another busy month but it didn’t yield much content. I ran a second half-marathon, this time with much less training, but only finished a few minutes slower than I did earlier this year. Next year I’m thinking about training at a normal mileage for the kinds of races I’m running - 25 miles or so per week instead of this year’s roughly 15.
And speaking of running, I just wrote up this opinion I have about how fewer people should run marathons.
I enjoyed reading Why Functor Doesn’t Matter, but I don’t really agree. The problem that I had with functional programming jargon isn’t that the particular terms are strange or uncommon, but that their definitions rely on a series of other jargon terms, and the discipline tends to omit good examples, metaphors, or plain-language explanations. It’s not that the strict definition is bad, but when a functor is defined as “a mapping that associates each morphism f: X -> Y in category C with a morphism F(f): F(X) -> F(Y) in category D”, you now have to define morphisms, categories, and objects, all of which have domain-specific definitions.
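For what it’s worth, the kind of plain-language example I always wanted is something like this, with arrays as the functor:

// an array is a functor: .map() takes an ordinary function f: X -> Y
// and lifts it to work on Array<X> -> Array<Y>
const f = (x: number): string => x.toFixed(2);

const xs: number[] = [1, 2, 3];
const ys: string[] = xs.map(f); // ["1.00", "2.00", "3.00"]

// the functor laws are just sanity checks:
//   xs.map(x => x) leaves the array unchanged                       (identity)
//   xs.map(f).map(g) gives the same result as xs.map(x => g(f(x)))  (composition)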
I am loving Sherif’s posting about building a bike from the frame up.
Maximizers are biased to speed, optionality, breadth, momentum, opportunism, parallel bets, hype, luck exposure, “Why not both?”, “Better to move fast than wait for perfect”. Maximizers want to see concrete examples before they’ll make tradeoffs. They anchor decisions in the tangible. “Stop making things so complicated.” “Stop overthinking.”
Focusers are biased to focus, coherence, depth, meaningful constraints, doing less for more, sequential experiments, intentionality, sustainability, “What matters most?”, compounding clarity. Focusers are comfortable with abstraction. A clear constraint or principle is enough to guide them. “Stop mistaking chaos for progress.” “Stop overdoing.”
John Cutler’s post about maximizers vs. focusers matches my experience in tech. Like many young engineers, I think I started out as a focuser and have tried to drift toward the center over time, but the tension, both internal and interpersonal, is present at every job.
I recently remarked to a friend that traveling abroad after the advent of the smartphone feels like studying biology after the advent of microplastics. It has touched every aspect of life. No matter where you point your microscope you will see its impact.
Josh Erb’s blog about living in India is great, personal, a classic blog’s blog.
For me, the only reason to keep going is to try and make AI a wonderful technology for the world. Some feel the same. Others are going because they’re locked in on a path to generational wealth. Plenty don’t have either of these alignments, and the wall of effort comes sooner.
This article about AI researchers working all the time and burning out is interesting, in part because I find the intention of AI researchers so confusing. I can see the economic intention: these guys are making bank! Congrats to all of them. But it’s so rare to talk to anyone who has a concrete idea about how they are making the world better by doing what they’re doing, and that’s the reason why they’re working so hard. OpenAI seems to keep getting distracted from that cancer cure, and their restructuring into a for-profit company kind of indicates that there’s more greed than altruism in the mix.
every vc who bet on the modern data stack watched their investments get acquired for pennies or go to zero. the only survivors: the warehouses themselves, or the companies the warehouses bought to strengthen their moats.
It’s niche, but this article about Snowflake, dbt, fivetran, and other ‘data lake’ architecture is really enlightening.
Totorro’s new album was the only one I picked up this month. It’s pretty good math-rock, very energetic and precise.
Speaking of weird, Ben Levin’s gesamtkunstwerk videos are wild and glorious.
You might have seen an article making the rounds this week, about a young man who ended his life after ChatGPT encouraged him to do so. The chat logs are really upsetting.
Someone two degrees removed from me took their life a few weeks ago. A close friend related the story to me, about how this person had approached their neighbor one evening to catch up, make small talk, and casually discussed their suicidal ideation at some length. At the end of the conversation, they asked to borrow a rope, and their neighbor agreed without giving the request any critical thought. The neighbor found them the next morning.
I didn’t know the deceased, nor their neighbor, but I’m close friends with someone who knew both. I found their story deeply chilling – ice runs through my veins when I imagine how the neighbor must have felt. I had a similar feeling upon reading this article, wondering how the people behind ChatGPT and tools like it are feeling right now.
Two years ago, someone I knew personally took their life as well. I was not friendly with this person – in fact, we were on very poor terms. I remember at the time, I had called a crisis hotline just to ask an expert for advice on how to break this news to other people in my life, many of whom were also on poor terms with a person whose struggles to cope with their mental health issues caused a lot of harm to others.
None of us had to come to terms with any decisions with the same gravity as what that unfortunate neighbor had to face. None of us were ultimately responsible for this person’s troubles or were the impetus for what happened. Nonetheless, the uncomfortable and confronting feelings I experienced in the wake of that event perhaps give me some basis for empathy and understanding towards the neighbor, or for OpenAI employees, and others who find themselves in similar situations.
If you work on LLMs, well… listen, I’ve made my position as an opponent of this technology clear. I feel that these tools are being developed and deployed recklessly, and I believe tragedy is the inevitable result of that recklessness. If you confide in me, I’m not going to validate your career choice. But maybe that’s not necessarily a bad quality to have in a confidant? I still feel empathy towards you and I recognize your humanity and our need to acknowledge each other as people.
If you feel that I can help, I encourage you to reach out. I will keep our conversation in confidence, and you can reach out anonymously if that makes you feel safer. I’m a good listener and I want to know how you’re doing. Email me.
If you’re experiencing a crisis, 24-hour support is available from real people who are experts in getting you the help you need. Please consider reaching out. All you need to do is follow the link.
Outages, you say? Of course I have stories about outages, and limits, and some limits causing outages, and other things just screwing life up. Here are some random thoughts which sprang to mind upon reading this morning's popcorn-fest.
...
I was brand new at a company that "everybody knew" had AMAZING infrastructure. They could do things with Linux boxes that nobody else could. As part of the new employee process, I had to get accounts in a bunch of systems, and one of them was this database used to track the states of machines. It was where you could look to see if it was (supposed to be) serving, or under repair, or whatever. You could also see (to some degree) what services were supposed to be running on it, what servers (that is, actual programs) and port numbers they involved, and whether all of that stuff was synced to the files on the box or not.
My request didn't go through for a while, and I found out that it had something to do with my employee ID being a bit over 32767. And yeah, for those of you who didn't just facepalm at seeing that number, that's one of those "magic numbers" which pops up a bunch when talking about limits. That one is what you get when you try to store numbers as 16-bit values... with a sign to allow negative values. Why you'd want a negative employee number is anyone's guess, but that's how they configured it.
I assume they fixed the database schema at some point to allow more than ~15 bits of employee numbers, but they did an interesting workaround to get me going before then. They just shaved off the last digit and gave me that ID in their system instead. I ended up as 34xx instead of 34xxx, more or less.
This was probably my first hint that their "amazing infra" was in fact the same kind of random crazytown as everywhere else once you got to see behind the curtain.
...
Then there was the time that someone decided that a log storage system that had something like a quarter of a million machines (and growing fast) feeding it needed a static configuration. The situation unfolded like this:
(person 1) Hey, why is this thing crashing so much?
(person 2) Oh yeah, it's dumping cores constantly! Wow!
(person 1) It's running but there's nothing in the log?
(person 2) Huh, "runtime error ... bad id mapping?"
(person 2) It's been doing this for a month... and wait, other machines are doing it, too!
(person 1) Guess I'll dig into this.
(person 2) "range name webserv_log.building1.phase3 range [1-20000]"
(person 2) But this machine is named webserv20680...
(person 2) Yeah, that's enough for me. Bye!
The machines were named with a ratcheting counter: any time they were assigned to be a web server, they got names like "webserv1", "webserv2", ... and so on up the line. That had been the case all along.
Whoever designed this log system years later decided to put a hard-coded limiter into it. I don't know if they did it because they wanted to feel useful every time it broke so they could race in and fix it, or if they didn't care, or if they truly had no idea that numbers could in fact grow beyond 20000.
Incidentally, that particular "building1.phase3" location didn't even have 20000 machines at that specific moment. It had maybe 15000 of them, but as things went away and came back, the ever-incrementing counter just went up and up and up. So, there _had been_ north of 20K machines in that spot overall, and that wasn't even close to a surprising number.
...
There was a single line that would catch obvious badness at a particular gig where we had far too many Apache web servers running on various crusty Linux distributions:
locate access_log | xargs ls -la | grep 2147
It was what I'd send in chat to someone who said "hey, the customer's web server won't stay up". The odds were very good that they had a log file that had grown to 2.1 GB, and had hit a hard limit which was present in that particular system. Apache would try to write to it, that write would fail, and the whole process would abort.
"2147", of course, is the first 4 digits of the expected file size: 2147483647 ... or (2^31)-1.
Yep, that's another one of those "not enough bits" problems like the earlier story, but this one is 32 bits with one of them being for the sign, not 16 like before. It's the same problem, though: the counter maxes out and you're done.
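Same arithmetic both times, if you want to check it at home (any language works):

// one bit for the sign, the rest for the counter
console.log(2 ** 15 - 1); // 32767      - the employee ID ceiling
console.log(2 ** 31 - 1); // 2147483647 - the Apache log file ceiling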
These days, files can get quite a bit bigger... but you should still rotate your damn log files once in a while. You should probably also figure out what's pooping in them so much and try to clean that up, too!
...
As the last one for now, there was an outage where someone reported that something like half of their machines were down. They had tried to do a kernel update, and wound up hitting half of them at once. I suspect they wanted to do a much smaller quantity, but messed up and hit fully half of them somehow. Or, maybe they pointed it at all of them, and only half succeeded at it. Whatever the cause, they now had 1000 freshly-rebooted machines.
The new kernel was fine, and the usual service manager stuff came back up, and it went to start the workload for those systems, and then it would immediately crash. It would try to start it again. It would crash again. Crash crash crash. This is why we call it "crashlooping".
Finally, the person in question showed up in the usual place where we discussed outages, and started talking about what was going on.
(person 1) Our stuff isn't coming back.
(person 2) Oh yeah, that's bad, they're all trying to start.
(person 1) Start, abort, start, abort, ...
(person 2) Yep, aborting... right about here: company::project::client::BlahClient::loadConfig ... which is this code: <paste>
(person 2) It's calling "get or throw" on a map for an ID number...
(person 1) My guess is the config provider service isn't running.
(person 2) It's there... it's been up for 30 minutes...
(person 1) Restarting the jobs.
(person 2) Nooooooooooo...
<time passes>
(person 2) Why is there no entry for number 86 in the map in the config?
(person 1) Oh, I bet it's problems with port takeover.
(person 3) I think entry 86 is missing from <file>.
(person 2) Definitely is missing.
(person 4) Hey everyone, we removed that a while back. Why would it only be failing now?
(person 2) It's only loaded at startup, right?
(person 4) Right.
(person 2) So if they were running for a long time, then it changed, then they're toast after a restart...
(person 3) Hey, this change looks related.
(person 4) I'm going to back that out.
This is a common situation: program A reads config C. When it starts up, config C is on version C1, and everything is fine. While A is running, the config is updated from C1 to C2, but nothing notices. Later, A tries to restart and it chokes on the C2 config, and refuses to start.
Normally, you'd only restart a few things to get started, and you'd notice that your program can't consume the new config at that point. You'd still have a few instances down, but that's it - a *few* instances. Your service should keep running on whatever's left over that you purposely didn't touch.
This is why you strive to release things in increments.
Also, it helps when programs notice config changes while they're running, so this doesn't sneak up on you much later when you're trying to restart. If the programs notice the bad config right after the change is made, it's *far* easier to correlate it to the change just by looking at the timeline.
Tuesday, 11:23:51: someone applies change.
Tuesday, 11:23:55: first 1% of machines which subscribe to the change start complaining.
... easy, right? Now compare it to this:
Tuesday, November 18: someone applies a change
Wednesday, January 7: 50% of machines fail to start "for some reason"
That's a lot harder to nail down.
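Here's a sketch of what that noticing can look like - everything in it is made up, the point is the validate-on-change shape:

import { readFileSync, watch } from "node:fs";

function load(path: string) {
  const config = JSON.parse(readFileSync(path, "utf8"));
  // the same validation a fresh process would do at startup
  if (!("86" in config.idMap)) throw new Error("no mapping for id 86");
  return config;
}

let config = load("service.conf");

watch("service.conf", () => {
  try {
    config = load("service.conf");
  } catch (err) {
    // complain now, while it's trivial to correlate with the change
    console.error("new config rejected, keeping the old one:", err);
  }
});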
...
Random aside: restarting the jobs did not help. They were already restarting themselves. "Retry, reboot, reinstall, repeat" is NOT a strategy for success.
It was not the config system being down. It was up the whole time.
It was nothing to do with "port takeover". What does that have to do with a config file being bad?
The evidence was there: the processes were crashing. They were logging a message about WHY they were killing themselves. It included a number they wanted to see, but couldn't find. It also said what part of the code was blowing up.
*That* is where you start looking. You don't just start hammering random things.
I've been studying the standard cell circuitry in the Intel 386 processor recently. The 386, introduced in 1985, was Intel's most complex processor at the time, containing 285,000 transistors. Intel's existing design techniques couldn't handle this complexity and the chip began to fall behind schedule. To meet the schedule, the 386 team started using a technique called standard cell logic. Instead of laying out each transistor manually, the layout process was performed by a computer.
The idea behind standard cell logic is to create standardized circuits (standard cells) for each type of logic element, such as an inverter, NAND gate, or latch. You feed your circuit description into software that selects the necessary cells, positions these cells into columns, and then routes the wiring between the cells. This "automatic place and route" process creates the chip layout much faster than manual layout. However, switching to standard cells was a risky decision since if the software couldn't create a dense enough layout, the chip couldn't be manufactured. But in the end, the 386 finished ahead of schedule, an almost unheard-of accomplishment.1
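As a toy sketch of the "place" half (purely illustrative - the real tools also had to minimize wire length, and routing the channels is the hard part):

// standard cells are fixed-width, so placement is stacking cells into
// columns; the gaps between columns become routing channels
type Cell = { name: string; height: number };

const library: Record<string, Cell> = {
  inv: { name: "inverter", height: 2 },
  nand2: { name: "2-input NAND", height: 3 },
  latch: { name: "latch", height: 5 },
};

// the circuit description: one entry per logic element
const netlist = ["inv", "nand2", "latch", "inv", "nand2"];

const budget = 8; // max column height
const columns: Cell[][] = [[]];
let used = 0;
for (const kind of netlist) {
  const cell = library[kind];
  if (used + cell.height > budget) {
    columns.push([]); // start a new column
    used = 0;
  }
  columns[columns.length - 1].push(cell);
  used += cell.height;
}
console.log(columns.map((col) => col.map((c) => c.name)));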
The 386's standard cell circuitry contains a few circuits that I didn't expect. In this blog post, I'll take a quick look at some of these circuits: surprisingly large multiplexers, a transistor that doesn't fit into the standard cell layout, and inverters that turned out not to be inverters. (If you want more background on standard cells in the 386, see my earlier post, "Reverse engineering standard cell logic in the Intel 386 processor".)
The photo below shows the 386 die with the automatic-place-and-route regions highlighted; I'm focusing on the red region in the lower right. These blocks of logic have cells arranged in rows, giving them a characteristic striped appearance. The dark stripes are the transistors that make up the logic gates, while the lighter regions between the stripes are the "routing channels" that hold the wiring that connects the cells. In comparison, functional blocks such as the datapath on the left and the microcode ROM in the lower right were designed manually to optimize density and performance, giving them a more solid appearance.
As for other features on the chip, the black circles around the border are bond wire connections that go to the chip's external pins. The chip has two metal layers, a small number by modern standards, but a jump from the single metal layer of earlier processors such as the 286. (Providing two layers of metal made automated routing practical: one layer can hold horizontal wires while the other layer can hold vertical wires.) The metal appears white in larger areas, but purplish where circuitry underneath roughens its surface. The underlying silicon and the polysilicon wiring are obscured by the metal layers.
The standard cell circuitry that I'm examining (red box above) is part of the control logic that selects registers while executing an instruction. You might think that it is easy to select which registers take part in an instruction, but due to the complexity of the x86 architecture, it is more difficult. One problem is that a 32-bit register such as EAX can also be treated as the 16-bit register AX, or two 8-bit registers AH and AL. A second problem is that some instructions include a "direction" bit that switches the source and destination registers. Moreover, sometimes the register is specified by bits in the instruction, but in other cases, the register is specified by the microcode. Due to these factors, selecting the registers for an operation is a complicated process with many cases, using control bits from the instruction, from the microcode, and from other sources.
Three registers need to be selected for an operation—two source registers and a destination register—and there are about 17 cases that need to be handled. Registers are specified with 7-bit control signals that select one of the 30 registers and control which part of the register is accessed. With three control signals, each 7 bits wide, and about 17 cases for each, you can see that the register control logic is large and complicated. (I wrote more about the 386's registers here.)
I'm still reverse engineering the register control logic, so I won't go into details. Instead, I'll discuss how the register control circuit uses multiplexers, implemented with standard cells. A multiplexer is a circuit that combines multiple input signals into a single output by selecting one of the inputs.2 A multiplexer can be implemented with logic gates, for instance, by ANDing each input with the corresponding control line, and then ORing the results together. However, the 386 uses a different approach—CMOS switches—that avoids a large AND/OR gate.
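As a behavioral sketch in software (not how the hardware is built), that AND/OR construction is:

// one-hot multiplexer from AND/OR logic: AND each input with its
// select line, then OR the results; exactly one select line is active
function mux(inputs: boolean[], select: boolean[]): boolean {
  return inputs.some((input, i) => input && select[i]);
}

// selecting input B (index 1) from [A, B, C]:
mux([true, false, true], [false, true, false]); // => false, the value of B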
The schematic above shows how a CMOS switch is constructed from two MOS transistors. When the two transistors are on, the output is connected to the input, but when the two transistors are off, the output is isolated. An NMOS transistor is turned on when its input is high, but a PMOS transistor is turned on when its input is low. Thus, the switch uses two control inputs, one inverted. The motivation for using two transistors is that an NMOS transistor is better at pulling the output low, while a PMOS transistor is better at pulling the output high, so combining them yields the best performance.3 Unlike a logic gate, the CMOS switch has no amplification, so a signal is weakened as it passes through the switch. As will be seen below, inverters can be used to amplify the signal.
The image below shows how CMOS switches appear under the microscope. This image is very hard to interpret because the two layers of metal on the 386 are packed together densely, but you can see that some wires run horizontally and others run vertically. The bottom layer of metal (called M1) runs vertically in the routing area, as well as providing internal wiring for a cell. The top layer of metal (M2) runs horizontally; unlike M1, the M2 wires can cross a cell. The large circles are vias that connect the M1 and M2 layers, while the small circles are connections between M1 and polysilicon or M1 and silicon. The central third of the image is a column of standard cells with two CMOS switches outlined in green. The cells are bordered by the vertical ground rail and +5V rail that power the cells. The routing areas are on either side of the cells, holding the wiring that connects the cells.
Removing the metal layers reveals the underlying silicon with a layer of polysilicon wiring on top. The doped silicon regions show up as dark outlines. I've drawn the polysilicon in green; it forms a transistor (brighter green) when it crosses doped silicon. The metal ground and power lines are shown in blue and red, respectively, with other metal wiring in purple. The black dots are vias between layers. Note how metal wiring (purple) and polysilicon wiring (green) are combined to route signals within the cell. Although this standard cell is complicated, the important thing is that it only needs to be designed once. The standard cells for different functions are all designed to have the same width, so the cells can be arranged in columns, snapped together like Lego bricks.
To summarize, this switch circuit allows the input to be connected to the output or disconnected, controlled by the select signal. This switch is more complicated than the earlier schematic because it includes two inverters to amplify the signal. The data input and the two select lines are connected to the polysilicon (green); the cell is designed so these connections can be made on either side. At the top, the input goes through a standard two-transistor inverter. The lower left has two transistors, combining the NMOS half of an inverter with the NMOS half of the switch. A similar circuit on the right combines the PMOS part of an inverter and switch. However, because PMOS transistors are weaker, this part of the circuit is duplicated.
A multiplexer is constructed by combining multiple switches, one for each input. Turning on one switch will select the corresponding input. For instance, a four-to-one multiplexer has four switches, so it can select one of the four inputs.
The schematic above shows a hypothetical multiplexer with four inputs. One optimization is that if an input is always 0, the PMOS transistor can be omitted. Likewise, if an input is always 1, the NMOS transistor can be omitted. One set of select lines is activated at a time to select the corresponding input. The pink circuit selects 1, green selects input A, yellow selects input B, and blue selects 0. The multiplexers in the 386 are similar, but have more inputs.
The diagram below shows how much circuitry is devoted to multiplexers in this block of standard cells. The green, purple, and red cells correspond to the multiplexers driving the three register control outputs. The yellow cells are inverters that generate the inverted control signals for the CMOS switches. This diagram also shows how the automatic layout of cells results in a layout that appears random.
The idea of standard-cell logic is that standardized cells are arranged in columns. The space between the cells is the "routing channel", holding the wiring that links the cells. The 386 circuitry follows this layout, except for one single transistor, sitting between two columns of cells.
I wrote some software tools to help me analyze the standard cells. Unfortunately, my tools assumed that all the cells were in columns, so this one wayward transistor caused me considerable inconvenience.
The transistor turns out to be a PMOS transistor, pulling a signal high as part of a multiplexer. But why is this transistor out of place? My hypothesis is that the transistor is a bug fix. Regenerating the cell layout was very costly, taking many hours on an IBM mainframe computer. Presumably, someone found that they could just stick the necessary transistor into an unused spot in the routing channel, manually add the necessary wiring, and avoid the delay of regenerating all the cells.
The simplest CMOS gate is the inverter, with an NMOS transistor to pull the output low and a PMOS transistor to pull the output high. The standard cell circuitry that I examined contains over a hundred inverters of various sizes. (Performance is improved by using inverters that aren't too small but also aren't larger than necessary for a particular circuit. Thus, the standard cell library includes inverters of multiple sizes.)
The image below shows a medium-sized standard-cell inverter under the microscope. For this image, I removed the two metal layers with acid to show the underlying polysilicon (bright green) and silicon (gray). The quality of this image is poor—it is difficult to remove the metal without destroying the polysilicon—but the diagram below should clarify the circuit. The inverter has two transistors: a PMOS transistor connected to +5 volts to pull the output high when the input is 0, and an NMOS transistor connected to ground to pull the output low when the input is 1. (The PMOS transistor needs to be larger because PMOS transistors don't function as well as NMOS transistors due to silicon physics.)
The polysilicon input line plays a key role: where it crosses the doped silicon, a transistor gate is formed. To make the standard cell more flexible, the input to the inverter can be connected on either the left or the right; in this case, the input is connected on the right and there is no connection on the left. The inverter's output can be taken from the polysilicon on the upper left or the right, but in this case, it is taken from the upper metal layer (not shown). The power, ground, and output lines are in the lower metal layer, which I have represented by the thin red, blue, and yellow lines. The black circles are connections between the metal layer and the underlying silicon.
This inverter appears dozens of times in the circuitry. However, I came across a few inverters that didn't make sense. The problem was that the inverter's output was connected to the output of a multiplexer. Since an inverter is either on or off, its value would clobber the output of the multiplexer.4 This didn't make any sense. I double- and triple-checked the wiring to make sure I hadn't messed up. After more investigation, I found another problem: the input to a "bad" inverter didn't make sense either. The input consisted of two signals shorted together, which doesn't work.
Finally, I realized what was going on. A "bad inverter" has the exact silicon layout of an inverter, but it wasn't an inverter: it was independent NMOS and PMOS transistors with separate inputs. Now it all made sense. With two inputs, the input signals were independent, not shorted together. And since the transistors were controlled separately, the NMOS transistor could pull the output low in some circumstances, the PMOS transistor could pull the output high in other circumstances, or both transistors could be off, allowing the multiplexer's output to be used undisturbed. In other words, the "inverter" was just two more cases for the multiplexer.
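Here is the same toy style applied to the "bad inverter" (again my sketch, not the actual netlist): controlled separately, the transistor pair can pull the shared output high, pull it low, or float and leave the multiplexer's value undisturbed.

```python
# Toy model of the "bad inverter": independent PMOS and NMOS transistors
# whose shared output is wired to the mux output. 'on' here abstractly
# means the transistor is conducting.

def split_inverter(pmos_on, nmos_on, mux_value):
    assert not (pmos_on and nmos_on), "both on would short +5V to ground"
    if pmos_on:
        return 1           # PMOS pulls the shared output high
    if nmos_on:
        return 0           # NMOS pulls the shared output low
    return mux_value       # both off: the multiplexer's output is undisturbed

print(split_inverter(False, False, mux_value=1))  # mux wins -> 1
print(split_inverter(True, False, mux_value=0))   # PMOS overrides -> 1
```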
If you compare the "bad inverter" cell below with the previous cell, they look almost the same, but there are subtle differences. First, the gates of the two transistors are connected in the real inverter, but disconnected by a small gap in the transistor pair. I've indicated this gap in the photo above; it is hard to tell if the gap is real or just an imaging artifact, so I didn't spot it. The second difference is that the "fake" inverter has two input connections, one to each transistor, while the inverter has a single input connection. Unfortunately, I assumed that the two connections were just a trick to route the signal across the inverter without requiring an extra wire. In total, this cell was used 32 times as a real inverter and 9 times as independent transistors.
Standard cell logic and automatic place and route have a long history before the 386, back to the early 1970s, so this isn't an Intel invention.5 Nonetheless, the 386 team deserves the credit for deciding to use this technology at a time when it was a risky decision. They needed to develop custom software for their placing and routing needs, so this wasn't a trivial undertaking. This choice paid off and they completed the 386 ahead of schedule. The 386 ended up being a huge success for Intel, moving the x86 architecture to 32 bits and defining the dominant computer architecture for the rest of the 20th century.
If you're interested in standard cell logic, I also wrote about standard cell logic in an IBM chip. I plan to write more about the 386, so follow me on Mastodon, Bluesky, or RSS for updates. Thanks to Pat Gelsinger and Roxanne Koester for providing helpful papers.
For more on the 386 and other chips, follow me on Mastodon (@kenshirriff@oldbytes.space), Bluesky (@righto.com), or RSS. (I've given up on Twitter.) If you want to read more about the 386, I've written about the clock pin, prefetch queue, die versions, packaging, and I/O circuits.
The decision to use automatic place and route is described on page 13 of the Intel 386 Microprocessor Design and Development Oral History Panel, a very interesting document on the 386 with discussion from some of the people involved in its development. ↩
Multiplexers often take a binary control signal to select the desired input. For instance, an 8-to-1 multiplexer selects one of 8 inputs, so a 3-bit control signal can specify the desired input. The 386's multiplexers use a different approach with one control signal per input. One of the 8 control signals is activated to select the desired input. This approach is called a "one-hot encoding" since one control line is activated (hot) at a time. ↩
Some chips, such as the MOS Technology 6502 processor, are built with NMOS technology, without PMOS transistors. Multiplexers in the 6502 use a single NMOS transistor, rather than the two transistors in the CMOS switch. However, the performance of the switch is worse. ↩
One very common circuit in the 386 is a latch constructed from an inverter loop and a switch/multiplexer. The inverter's output and the switch's output are connected together. The trick, however, is that the inverter is constructed from special weak transistors. When the switch is disabled, the inverter's weak output is sufficient to drive the loop. But to write a value into the latch, the switch is enabled and its output overpowers the weak inverter.
The point of this is that there are circuits where an inverter and a multiplexer have their outputs connected. However, the inverter must be constructed with special weak transistors, which is not the situation that I'm discussing. ↩
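For intuition, a toy sketch of the latch behaviour described above, abstracting away the transistor sizing that actually makes it work:

```python
# Toy model of the weak-inverter latch: the inverter loop holds the bit
# unless the (stronger) switch is enabled, in which case the switch wins.

class WeakLatch:
    def __init__(self, bit=0):
        self.bit = bit                 # held by the weak inverter loop

    def step(self, switch_enabled, switch_value):
        if switch_enabled:
            self.bit = switch_value    # strong switch overpowers the weak inverter
        return self.bit                # switch disabled: weak feedback holds the value

latch = WeakLatch()
latch.step(True, 1)          # write 1 into the latch
print(latch.step(False, 0))  # switch disabled: still holds 1
```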
I'll provide more history on standard cells in this footnote. RCA patented a bipolar standard cell in 1971, but this was a fixed arrangement of transistors and resistors, more of a gate array than a modern standard cell. Bell Labs researched standard cell layout techniques in the early 1970s, calling them Polycells, including a 1973 paper by Brian Kernighan. By 1979, A Guide to LSI Implementation discussed the standard cell approach and it was described as well-known in this patent application. Even so, Electronics called these design methods "futuristic" in 1980.
Standard cells became popular in the mid-1980s as faster computers and improved design software made it practical to produce semi-custom designs that used standard cells. Standard cells made it to the cover of Digital Design in August 1985, and the article inside described numerous vendors and products. Companies like Zymos and VLSI Technology (VTI) focused on standard cells. Traditional companies such as Texas Instruments, NCR, GE/RCA, Fairchild, Harris, ITT, and Thomson introduced lines of standard cell products in the mid-1980s. ↩
I left a loose end the other day when I said that AI is about intent and context.
That was when I said "what’s context at inference time is valuable training data if it’s recorded."
But I left it at that, and didn’t really get into why training data is valuable.
I think we often just draw a straight arrow from “collect training data,” like ingesting pages from Wikipedia or seeing what people say to the chatbot, to “now the AI model is better and therefore it wins.”
But I think it’s worth thinking about what that arrow actually means. Like, what is the mechanism here?
Now all of this is just my mental model for what’s going on.
With that caveat:
To my mind, the era-defining AI company is the one that is the first to close two self-accelerating loops.
Both are to do with training data. The first is the general theory; the second is specific.
When I say era-defining companies, to me there’s an era-defining idea, or at least era-describing, and that’s Nick Srnicek’s concept of Platform Capitalism (Amazon).
It is the logic that underpins the success of Uber, Facebook, Amazon, Google search (and in the future, Waymo).
I’ve gone on about platform capitalism before (2020) but in a nutshell Srnicek describes a process whereby
Even to the point that in 2012 Amazon filed a patent on anticipatory shipping (TechCrunch) in which, if you display a strong intent to buy laundry tabs, they’ll put them on a truck and move them towards your door, only aborting delivery if you end up not hitting the Buy Now button.
And this is also kinda how Uber works right?
Uber has a better matching algorithm than you keeping the local minicab company on speed dial on your phone, which only works when you’re in your home location, and surge pricing moves drivers to hotspots in anticipation of matching with passengers.
And it’s how Google search works.
They see what people click on, and use that to improve the algo which drives marketplace activity, and AdSense keyword cost incentivises new entrants which increases marketplace size.
So how do marketplace efficiency and marketplace size translate to, say, ChatGPT?
ChatGPT can see what success looks like for a “buyer” (a ChatGPT user).
They generate an answer; do users respond well to it or not? (However that is measured.)
So that usage data becomes training data to improve the model to close the gap between user intent and transaction.
Right now, ChatGPT itself is the “seller”. To fully close the loop, they’ll need to open up to other sellers and ChatGPT itself transitions to being the market-maker (and taking a cut of transactions).
And you can see that process with the new OpenAI shopping feature right?
This is the template for all kinds of AI app products: anything that people want, any activity, if there’s a transaction at the end, the model will bring buyers and sellers closer together – marketplace efficiency.
Also there is marketplace size.
Product discovery: OpenAI can see what people type into ChatGPT. Which means they know how to target their research way better than the next company which doesn’t have access to latent user needs like that.
So here, training data for the model mainly comes from usage data. It’s a closed loop.
But how does OpenAI (or whoever) get the loop going in the first place?
With some use cases, like (say) writing a poem, the “seed” training data was in the initial web scrape; with shopping the seed training data came as a result of adding web search to chat and watching users click on links.
But there are more interesting products…
How do product managers triage tickets?
How do plumbers do their work?
You can get seed training data for those products in a couple of ways, but I think there’s an assumption that the AI companies need to trick people out of their data: being present in their file system, or adding an AI agent to their SaaS software at work, then hiding something in the terms of service that says the data can be used to train future models.
I just don’t feel like that assumption holds, at least not for the biggest companies.
Alternate access to seed training data method #1: just buy it.
I’ll take one example which is multiplayer chat. OpenAI just launched group chat in ChatGPT:
We’ve also taught ChatGPT new social behaviors for group chats. It follows the flow of the conversation and decides when to respond and when to stay quiet based on the context of the group conversation.
Back in May I did a deep dive into multiplayer AI chat. It’s really complicated. I outlined all the different parts of conversational turn taking theory that you need to account for to have a satisfying multiplayer conversation.
What I didn’t say at the end of that post was that, if I was building it, the whole complicated breakdown that I provided is not what I would do.
Instead I would find a big corpus of group chats for seed data and just train the model against that.
And it wouldn’t be perfect but it would be good enough to launch a product, and then you have actual live usage data coming in and you can iteratively train from there.
Where did that seed data come from for OpenAI? I don’t know. There was that reddit deal last year, maybe it was part of the bundle.
So they can buy data.
Or they can make it.
Alternate access to seed training data #2: cosplay it.
Every so often you hear gossip about how seed training data can be manufactured… I remember seeing a tweet about this a few months ago and now there’s a report:
AI agents are being trained on clones of SaaS products.
According to a new @theinformation report, Anthropic and OpenAI are building internal clones of popular SaaS apps so that they can train AI agents how to use them.
Internal researchers are giving the agents cloned, fake versions of products like Zendesk and Salesforce to teach the agents how to perform the tasks that white collar workers currently do.
The tweet I ran across was from a developer saying that cloning business apps for the purpose of being used in training was a sure-fire path to a quick acquisition, but that it felt maybe not ok.
My point is that AI companies don’t need to sneak onto computers to watch product managers triaging tickets in Linear. Instead, given the future value is evident, it’s worth it to simply build a simulation of Linear, stuff it with synthetic data, then pay fake product managers to cosplay managing product inside fake Linear, and train off that.
Incidentally, the reason I keep saying seed training data is that the requirement for it is one-off. Once the product loop has started, the product creates its own. Which is why I don’t believe that revenue from licensing social network data or scientific papers is real. There will be a different pay-per-access model in the future.
I’m interested in whether this model extends to physical AI.
Will they need lanyards around the necks of plumbers in order to observe plumbing and to train the humanoid robots of the future?
Or will it be more straightforward to scrape YouTube plumbing tutorials to get started, and then build a simulation of a house (physical or virtual, in Unreal Engine) and let the AI teach itself?
What I mean is that AI companies need access to seed training data, but where it comes from is product-dependent and there are many ways to skin a cat.
That’s loop #1 – a LLM-mediated marketplace loop that (a) closes on transactions and (b) throws off usage data that improves market efficiency and reveals other products.
Per-product seed training data is a one-off investment for the AI company and can be found in many ways.
This loop produces cash.
Loop #2 starts with a specific product from loop #1.
A coding product isn’t just a model which is good at understanding and writing code. It has to be wrapped in an agent for planning, and ultimately needs access to collaboration tools, AI PMs, AI user researchers, and all the rest.
I think it’s pretty clear now that coding with an agent is vastly quicker than a human coding on their own. And not just quicker but, from my own experience, I can achieve goals that were previously beyond my grasp.
The loop closes when coding agents accelerate the engineers who are building the coding agents and also, as a side effect, working on the underlying general purpose large language model.
There’s an interesting kind of paperclip maximisation problem here which is, if you’re choosing where to put your resources, do you build paperclip machines or do you build the machines to build the paperclip machines?
Well it seems like all the big AI companies have made the same call right now which is to pile their efforts into accelerating coding, because doing that accelerates everything else.
So those are the two big loops.
Whoever gets those first will win, that’s how I think about it.
I want to add two notes on this.
On training data feeding the marketplace loop:
Running the platform capitalism/marketplace loop is not the only way for a company to participate in the AI product economy.
Another way is to enable it.
Stripe is doing this. They’re working hard to be the default transaction rails for AI agents.
Apple has done this for the last decade or so of the previous platform capitalism loop. iPhone is the place to reach people for all of Facebook, Google, Amazon, Uber and more.
When I said before that AI companies are trying to get closer to the point of intent, part of what I mean is that they are trying to figure out a way that a single hardware company like Apple can’t insert itself into the loop and take its 30%.
Maybe, in the future, device interactions will be super commoditised. iPhone’s power is that it bundles together an interaction surface, connectivity, compute, identity and payment, and we have one each. It’s interesting to imagine what might break that scarcity.
On coding tools that improve coding tools:
How much do you believe in this accelerating, self-improving loop?
The big AI research labs all believe – or at least, if they don’t believe, they believe that the risk of being wrong is worse.
But, if true, “tools that make better tools that allow grabbing bigger marketplaces” is an Industrial Revolution-like driver: technology went from the steam engine to the transistor in less than 200 years. Who knows what will happen this time around.
Because there’s a third loop to be found, and that’s when the models get so good that they can be used for novel R&D, and the AI labs (who have the cash and access to the cheapest compute) start commercialising wheels with weird new physics or whatever.
Or maybe it’ll stall out. Hard to know where the top of the S-curve is.
These past few weeks I’ve been deep in code and doing what I think about as context plumbing.
I’ve been building an AI system and that’s what it feels like.
Let me unpack.
Intent
Loosely AI interfaces are about intent and context.
Intent is the user’s goal, big or small, explicit or implicit.
Uniquely for computers, AI can understand intent and respond in a really human way. This is a new capability! Like the user can type "I want to buy a camera" or point at a keylight and subvocalise "I’ve got a call in 20 minutes" or hit a button labeled "remove clouds" and job done.
Companies care about this because computers that are closer to intent tend to win.
e.g. the smartphone displaced the desktop. On a phone, you see something and then you touch it directly. With a desktop that intent is mediated through a pointer – you see something on-screen but to interact you tell your arm to move the mouse that moves the pointer. Although it doesn’t seem like much, your monkey brain doesn’t like it.
So the same applies to user interfaces in general: picking commands from menus or navigating and collating web pages to plan a holiday or remembering how the control panel on your HVAC works. All of that is bureaucracy. Figuring out the sequence for yourself is administrative burden between intent and result.
Now as an AI company, you can overcome that burden. And you want to be present at the very millisecond and in the very location where the user’s intent - desire - arises. You don’t want the user to have the burden of even taking a phone out of their pocket, or having to formulate an unconscious intent into words. Being closest to the origin of intent will crowd out their competitor companies.
That explains the push for devices like AI-enabled glasses or lanyards or mics or cameras that read your body language.
This is why I think the future of interfaces is Do What I Mean: it’s not just a new capability enabled by AI, there’s a whole attentional economics imperative to it.
Context
What makes an AI able to handle intent really, really well is context.
Sure there’s the world knowledge in the large language model itself, which it gets from vast amounts of training data.
But let’s say an AI agent is taking some user intent and hill-climbing towards that goal using a sequence of tool calls (which is how agents work) then it’s going to do way better when the prompt is filled with all kinds of useful context:
For example:
This has given rise to the idea of context engineering (LangChain blog):
Context engineering is building dynamic systems to provide the right information and tools in the right format such that the LLM can plausibly accomplish the task.
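As a toy illustration of that definition, here's a sketch in Python; the function and source names are mine, not from LangChain or any real framework:

```python
# Toy sketch of context engineering: a dynamic system that pulls the
# right information into the prompt at the moment of intent.
# All names here are illustrative.

def build_prompt(intent, context_sources):
    context = "\n".join(f"[{name}] {fetch()}" for name, fetch in context_sources.items())
    return f"{context}\n\nUser intent: {intent}"

sources = {
    "calendar": lambda: "call with Anna in 20 minutes",
    "weather":  lambda: "no longer sunny outside",
}
print(build_prompt("set up my keylight", sources))
```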
btw access to context also explains some behaviour of the big AI companies:
If you want to best answer user intent, then you need to be where the user context is, and that’s why being on a lanyard with an always-on camera is preferred over a regular on-demand camera, and why an AI agent that lives in your email archive is going to be more effective than one that doesn’t. So they really wanna get in there, really cosy up.
(And what’s context at inference time is valuable training data if it’s recorded, so there’s that too.)
Plumbing?
What’s missing in the idea of context engineering is that context is dynamic. It changes, it is timely.
Context appears at disparate sources, by user activity or changes in the user’s environment: what they’re working on changes, emails appear, documents are edited, it’s no longer sunny outside, the available tools have been updated.
This context is not always where the AI runs (and the AI runs as close as possible to the point of user intent).
So the job of making an agent run really well is to move the context to where it needs to be.
Essentially copying data out of one database and putting it into another one – but as a continuous process.
You often don’t want your AI agent to have to look up context every single time it answers intent. That’s slow. If you want an agent to act quickly then you have to plan ahead: build pipes that flow potential context from where it is created to where it’s going to be used.
How can that happen continuously behind the scenes without wasting bandwidth or cycles or the data going stale?
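Here's a sketch of what I mean by a pipe, as a continuous push rather than an on-demand lookup (all names illustrative):

```python
# Toy context pipe: continuously move fresh context from where it's
# created to a store next to where the agent runs, so inference-time
# reads are instant. Illustrative sketch only.

import time

def run_pipe(name, read_source, sink, interval_s=60):
    last = None
    while True:
        value = read_source()      # poll the source of context
        if value != last:          # move it only when it changed (no waste)
            sink[name] = value     # overwrite: context is timely, not append-only
            last = value
        time.sleep(interval_s)     # staleness is bounded by the interval

# Usage: run one pipe per source as a background process, and have the
# agent read from `sink` directly when it answers intent.
```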
So I’ve been thinking of AI system technical architecture as plumbing the sources and sinks of context.
In the old days of Web 2.0 the go-to technical architecture was a “CRUD” app: a web app wrapping a database where you would have entities and operations to create, read, update, and delete (these map onto the HTTP verbs POST, GET, PUT, and DELETE).
This was also the user experience, so the user entity would have a webpage (a profile) and the object entity, say a photo, would have a webpage, and then dynamic webpages would index the entities in different ways (a stream or a feed). And you could decompose webapps like this; the technology and the user understanding aligned.
With AI systems, you want the user to have an intuition about what context is available to it. The plumbing of context flow isn’t just what is technically possible or efficient, but what matches user expectation.
Anyway.
I am aware this is getting - for you, dear reader - impossibly abstract.
But for me, I’m building the platform I’ve been trying to build for the last 2 years, only this time it’s working.
I’m building on Cloudflare and I have context flowing between all kinds of entities and AI agents and sub-agents running where they need to run, and none of it feels tangled or confusing because it is plumbed just right.
And I wanted to make a note about that even if I can’t talk specifically, yet, about what it is.
I’m spinning up something new with a buddy and you can guess what it is by what I’ve been writing about recently.
Big picture, there are two visions for the future of computing: cyborgs and rooms. I’m Team Augmented Environments. Mainly because so much of what I care about happens in small groups in physical space: family time, team collaboration, all the rest.
Then what happens when we’re together with AI in these environments? Interfaces will be intent-first so what’s the room-scale OS for all of this? What are the products you find there?
And where do you start?
Anyway we’ve been designing and coding and planning.
“We” is me and Daniel Fogg. We’ve known each other for ages, both done hardware, and he’s been scaling and running Big Things the last few years.
We’re at the point where it’s more straightforward than not to give this thing a name and a landing page…
Yes early days but we need a logo for renders, software protos and the raise deck haha
(Drop me a note if you’d like to chat.)
So say hello to Inanimate and you can find us over here.
I had a look to see when I first mentioned Samuel Arbesman here. It was 2011: the average size of scientific discoveries is getting smaller.
Anyway I’ve been reading his new book, The Magic of Code (official site).
There’s computing history, magic, the simulation hypothesis, and a friendly unpacking of everything from procedural generation to Unix.
And through it all, an enthusiastic appeal to look again at computation, as if to say, look, isn’t it WEIRD! Isn’t it COOL! Because we’ve forgotten that code and computation deserves our wonder. And although this book isn’t an apology for technology ("computing is meant to be for the humans", says Arbesman), it is a reminder - demonstrated chapter by chapter - that wonder, delight and curiosity are there to be found.
(And if we look at computation afresh then we’ll have new ideas about what to do with it.)
Now I’m decently well-read in this kind of stuff.
Yet The Magic of Code is bringing me new-to-me computing lore, which I’m loving.
So, in the spirit of a virtual book tour - an old idea from the internet where book authors would tour blogs instead of book stores, as previously mentioned - I asked Samuel Arbesman for a reading list: 3 books from the Magic of Code bibliography.
(I’ve collected a couple dozen 3 Books reading lists over the years.)
I’ll ask him to introduce himself first…
Samuel! Tell us about yourself?
I’m a scientist and writer playing in the world of venture capital as Lux Capital‘s Scientist in Residence, where I help Lux explore the ever-changing landscape of science and technology, and also host a podcast called The Orthogonal Bet where I get to speak with some of the most interesting thinkers and authors I can find. I also write books about science and tech, most recently The Magic of Code, as well as The Half-Life of Facts and Overcomplicated. The themes in my work are often related to radical interdisciplinarity, intellectual humility in the face of complex technologies and our changing knowledge, and how to use tech to allow us to be the best version of ourselves.
The best way to follow me and what I’m thinking about is my newsletter: Cabinet of Wonders.
I asked for three fave books from the bibliography…
I love the history of computing. It’s weird and full of strange turns and dead ends, things worth rediscovering and understanding. But it’s far too easy to forget the historically contingent reasons why we have the technologies that we have (or simply know the paths not taken), and understanding this history, including the history of the ideas that undergird this world, is vital. More broadly, I want everyone in tech to have a “historical sense” and this book is a good place to start: it’s a handbook to seminal ideas and developments in computing, from the ELIZA chatbot and Licklider’s vision of “man-computer symbiosis” to Dijkstra’s hatred of the “go to” command. Because the ideas we are currently grappling with are not necessarily new and they have a deep intellectual pedigree. Want to know the grand mages of computing history and what they thought about? Read this book.
Ideas That Created the Future: Classic Papers of Computer Science: Amazon
I’m pretty sure that I first read this entire book–it’s short–in a single sitting at the library after stumbling upon it. It’s ornery and opinionated about so many computing ideas, from Linux and GUIs to open source and even the Be operating system (it was written in the 1990s and is very much of its time). Want to think about these ideas in the context of bizarre metaphors or a comparison to the Epic of Gilgamesh? Stephenson is your guy. This expanded my mind as to what computing is and what it can mean (the image of a demiurge using a command line to generate our universe has long stuck with me).
In the Beginning… Was the Command Line: Amazon / Wikipedia
Chaim Gingold worked with Will Wright while at Maxis and has thought a lot about the history of SimCity. And when I say history, I don’t just mean the way that Maxis came about and how SimCity was created and published, though there’s that too; I mean the winding intellectual origins of SimCity: cellular automata, system dynamics, and more. SimCity and its foundation is a window into the smashing-together of so many ideas–analog computers, toys, the nature of simulation–that is indicative of the proper way to view computing: computers are weirder and far more interdisciplinary than we give them credit for and we all need to know that. Computing is a liberal art and this book takes this idea seriously.
Building SimCity: How to Put the World in a Machine: Amazon
Amazing.
Hey here’s a deep cut ref for you: in 2010 Arbesman coined the term mesofact, "facts which we tend to view as fixed, but which shift over the course of a lifetime," or too slowly for us to notice. I think we all carry around a bunch of outdated priors and that means we often don’t see what’s right in front of us. I use this term a whole bunch in trying to think about and identify what I’m not seeing but should be.
Thank you Sam!
More posts tagged: 3-books (34).
Ok spoilers ahead.
But Oedipus Rex a.k.a. Oedipus Tyrannus by Sophocles is almost 2,500 years old at this point so it’s fair game imo.
The Oedipus story in a nutshell:
Oedipus, who was secretly adopted, receives a prophecy that he will kill his dad. So to thwart fate he leaves his dad and winds up in a city with a missing king (btw killing an argumentative guy on the way). Many years after becoming the new king and marrying the widow, he discovers that the dude he long-ago killed on the road was the missing king. Uh oh. And that the missing king was actually his birth dad, prophecy fulfilled. Double uh oh. And that his now-wife is therefore his birth mom. Uh oh for the third time. The wife/mom kills herself; Oedipus stabs out his own eyes and exiles himself. End.
So the Sophocles play is a re-telling of this already well-worn story, at a time when Athenian culture was oriented around annual drama competitions (it came second).
The new narrative spin on the old tale is that it’s told as a whodunnit set over a single day, sunrise to sunset.
In a combination of flashbacks and new action, Oedipus himself acts as detective solving the mystery of the old king’s murder.
We’re already well into Oedipus’ reign over Thebes when the play opens, so his arrival is all backstory, then it’s tragic revelation after tragic revelation as his investigations bear fruit, and–
Oedipus discovers the identity of the mysterious murderer, and it’s him.
What a twist!
I mean, this is “he was dead all along” levels of whoa, right?
So I’ve been trying to think of other whodunnits in which the detective finds out that they themselves are the killer.
I can only think of one and a half, plus one I’m not sure about?
SPOILERS
SPOILERS
SPOILERS
So there’s Fight Club (1999) which, if you see it as a whodunnit in which the protagonist is trying to catch up with Tyler Durden, they discover that yes indeed etc
A clearer fit is Angel Heart (1987) in which Mickey Rourke plays PI Harry Angel who is commissioned by Robert De Niro to dig into a murder, and well you can guess who did it by my topic, and also it turns out that De Niro is the devil.
There is also Memento (2000), maybe, because ironically I can’t remember what happened.
You would have thought that detective-catching-up-with-their-quarry-and-it’s-them would be a common twist.
But yeah, 3.5 auto-whodunnits in 2.5 thousand years is not so many.
There must be more?
In literature:
I can’t think of any Agatha Christies that do this but admittedly I’ve not read too many.
There’s a sci-fi time-loop element to the auto-whodunnit - the investigating time traveller from the future turns out to be the instigator in the past - but although the concept feels tantalisingly familiar, no specific stories come to mind.
I enjoy a Straussian reading and I would like to dowse the hidden, esoteric meaning of Oedipus, Angel Heart and the rest. What is the meaning behind the story?
Freud has his famous interpretation of course but although I am taken with his take on Medusa I don’t think he goes deep enough with Oedipus.
BECAUSE:
My go-to razor for deciphering creative works is that all creative works are fundamentally about the act of creation (2017).
That’s true of Star Wars (the Force is narrative), Blade Runner (the replicants are actors), Hamlet (Shakespeare himself played the ghost), and in a follow-up post I added Groundhog Day (the experience of script iteration) and 1984 (the real omniscient Big Brother is the reader).
Many such cases, as they say.
I call it the Narcissist Creator Razor. They can’t help themselves, those creators, it’s all they know.
So I believe that Oedipus Tyrannus, the original auto-whodunnit, is the ur-exemplar of this razor: what Oedipus tells us is that we can search and search and search for the meaning of a story, and search some more, and ultimately what we’ll find is ourselves, searching.
(Even as an author, part of what you do is try to fully understand what you’re saying in your own creation, so both author and reader are engaged in working to interpret the work.)
i.e. when you interpret Oedipus, you learn that what Oedipus is really about is the act of trying to interpret what Oedipus is really about.
Which makes you want to stab your eyes out, perhaps.
Honestly I’m wasted in the technology world, I should be a philosopher working to understand the nature of reality working to understand itself over an overflowing ashtray in a smoke-filled cafe in 50s Paris.
More posts tagged: inner-and-outer-realities (6), the-ancient-world-is-now (16).
When you sit with friends at a wobbly table,
Simply rotate till it becomes stable.
No need to find a wedge for one of its four feet.
Math will ensure nothing spills while you eat.
The Wobbly Table Theorem (Department of Mathematics, Harvard University).
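For the curious, the gist of the proof as I understand it (the hand-wavy version, assuming a square table and a continuous floor): press three feet to the ground and let g(θ) be the gap under the fourth foot after rotating the table by θ. A quarter turn swaps which diagonal hovers, flipping the sign of g, so:

```latex
g\!\left(\theta + \tfrac{\pi}{2}\right) = -\,g(\theta), \quad g \text{ continuous}
\;\Longrightarrow\; \exists\, \theta^{*} \in \left[0, \tfrac{\pi}{2}\right] : \; g(\theta^{*}) = 0
```

Some rotation within a quarter turn grounds all four feet, which is why rotating beats wedging.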
David Ogilvy changed advertising in 1951.
Shirts sold. Job done.
He used a surprise black eyepatch in the magazine spot:
“story appeal” makes consumers think a lot.
History of advertising: No 110: The Hathaway man’s eyepatch (Campaign, 2014).
Frogs live in ponds.
These massive ones too.
But they dig their own ponds
when nothing else will do.
The world’s biggest frogs build their own ponds (Science, 2019).
Rhyming poems have been going away,
from 70% in 1900 to almost zero today.
You know, I feel like we should all be doing our bit
to reverse the decline. But my poems are terrible.
Can you tell AI from human genius? (James Marriott).
More posts tagged: filtered-for (119).
I’m on my hols. Some recommendations.
Watch The Ballad of Wallis Island.
Charming, poignant comedy about a washed-up folk musician and loss. By Tim Key and Tom Basden.
Now I knew Basden could write - the first episode of Party is the tightest wittiest 25 minutes of ensemble radio you’ll hear - and I love everything Tim Key does as a comedian. But Key really is the revelation. Who knew he could act like that.
Watch on streaming and then listen to the soundtrack.
Play A Short Hike (I played it on Switch).
Indie video game about a cartoon bird hiking and climbing. A play-through will take you about 3 hours.
It’s cute and gentle and fun with a dozen subplots, and by the time I achieved the ostensible goal of the game I had forgotten what the purpose was and it totally took me by surprise. (Which made me cry, for personal reasons, another surprise.)
Also my kid just played this, her first self-guided video game experience. A Short Hike is deftly designed to nudge you forward through lo-fi NPC interactions, and invisibly gates levels of difficulty using geography.
Once you’ve played, watch Adam Robinson-Yu discussing A Short Hike’s design (GDC, 2020).
New daily puzzle: Clues by Sam.
A logic game that’ll take you 10 minutes each day. Follow the clues to figure out who is guilty and who is innocent. It’s easiest on Mondays so maybe begin then.
Meanwhile I’m running woodland trails in Ibiza and the scent of wild rosemary and sage fills the air in the morning. Right now I’m on a cove beach listening to the surf and the others are variously exploring, snacking and sunbathing. See you on the other side.
A couple of weeks ago I started a fundraiser for the Greater Chicago Food Depository: get Logic for Programmers 50% off and all the royalties will go to charity.1 Since then, we've raised a bit over $1600. Y'all are great!
The fundraiser is going on until the end of November, so you still have one more week to get the book real cheap.
I feel a bit weird about doing two newsletter adverts without raw content, so here's a teaser from an old project I really need to get back to. Notes on structured concurrency argues that old languages had an "old-testament fire-and-brimstone goto" that could send control flow anywhere, like from the body of one function into the body of another function. This "wild goto", the article claims, is what Dijkstra was railing against in Go To Statement Considered Harmful; modern goto statements are much more limited, "tame" if you will, and wouldn't invoke Dijkstra's ire.
I've shared this historical fact about Dijkstra many times, but recently two separate people have told me it doesn't make sense: Dijkstra used ALGOL-60, which already had tame gotos. All of the problems he raises with goto hold even for tame ones; none are exclusive to wild gotos.
So this got me looking to see which languages, if any, ever had the wild goto. I define this as any goto which lets you jump from outside a loop or function scope to inside it. Turns out, FORTRAN had tame gotos from the start, BASIC has wild gotos, and COBOL is a nonsense language intentionally designed to horrify me. I mean, look at this:
The COBOL ALTER statement changes a goto's target at runtime.
(Early COBOL has tame gotos but only on a technicality: there are no nested scopes in COBOL so no jumping from outside and into a nested scope.)
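To make "the goto target is runtime state" concrete, here's a toy Python interpreter that loosely imitates ALTER. This is my illustration of the idea, not real COBOL semantics:

```python
# Toy line-numbered interpreter where a GOTO's target is mutable data,
# loosely imitating COBOL's ALTER statement.

def run(program):
    lines = sorted(program)
    pc = lines[0]
    while True:
        op, *args = program[pc]
        if op == "print":
            print(args[0])
            pc = lines[lines.index(pc) + 1]
        elif op == "goto":
            pc = args[0]                          # jump to the (current) target
        elif op == "alter":
            target, new_dest = args
            program[target] = ("goto", new_dest)  # rewrite another GOTO at runtime
            pc = lines[lines.index(pc) + 1]
        elif op == "end":
            return

run({
    10: ("alter", 30, 50),  # change line 30's GOTO target from 40 to 50
    20: ("print", "before the goto"),
    30: ("goto", 40),       # as written it jumps to 40, but...
    40: ("print", "the original target, never reached"),
    50: ("print", "...ALTER sent us here instead"),
    60: ("end",),
})
```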
Anyway I need to write up the full story (and complain about COBOL more) but this is pretty neat! Reminder, fundraiser here. Let's get it to 2k.
Royalties are 80% so if you already have the book you get a bit more bang for your buck by donating to the GCFD directly ↩
From now until the end of the month, you can get Logic for Programmers at half price with the coupon feedchicago. All royalties from that coupon will go to the Greater Chicago Food Depository. Thank you!
Hi everyone,
I've been getting burnt out on writing a weekly software essay. It's gone from taking me an afternoon to write a post to taking two or three days, and that's made it really difficult to get other writing done. That, plus some short-term work and life priorities, means now feels like a good time for a break.
So I'm taking off from Computer Things for the rest of the year. There might be some announcements and/or one or two short newsletters in the meantime but I won't be attempting a weekly cadence until 2026.
Thanks again for reading!
Hillel
A while back my friend Pablo Meier was reviewing some 2024 videogames and wrote this:
I feel like some artists, if they didn't exist, would have the resulting void filled in by someone similar (e.g. if Katy Perry didn't exist, someone like her would have). But others don't have successful imitators or comparisons (thinking Jackie Chan, or Weird Al): they are irreplaceable.
He was using it to describe auteurs but I see this as a property of opportunity, in that "replaceable" artists are those who work in bigger markets. Katy Perry's market is large, visible and obviously (but not easily) exploitable, so there are a lot of people who'd compete in her niche. Weird Al's market is unclear: while there were successful parody songs in the past, it wasn't clear there was enough opportunity there to support a superstar.
I think that modal editing is in the latter category. Vim is now very popular and has spawned numerous successors. But its key feature, modes, is not obviously-beneficial, to the point that if Bill Joy didn't make vi (vim's direct predecessor) fifty years ago I don't think we'd have any modal editors today.
In a non-modal editor, pressing the "u" key adds a "u" to your text, as you'd expect. In a modal editor, pressing "u" does something different depending on the "mode" you are in. In Vim's default "normal" mode, "u" undoes the last change to the text, while in the "visual" mode it lowercases all selected text. It only inserts the character in "insert" mode. All other keys, as well as chorded shortcuts (ctrl-x), work the same way.
The clearest benefit to this is you can densely pack the keyboard with advanced commands. The standard US keyboard has 48ish keys dedicated to inserting characters. With the ctrl and shift modifiers that becomes at least ~150 extra shortcuts for each other mode. This is also what IMO "spiritually" distinguishes modal editing from contextual shortcuts. Even if a unimodal editor lets you change a keyboard shortcut's behavior based on languages or focused panel, without global user-controlled modes it simply can't achieve that density of shortcuts.
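A sketch of that mechanism, with Vim-flavoured bindings as examples (not a complete keymap):

```python
# Why modes multiply shortcuts: the same unmodified key dispatches to a
# different action depending on a global, user-controlled mode.

keymaps = {
    "normal": {"u": "undo last change"},
    "visual": {"u": "lowercase the selection"},
}

def press(mode, key):
    if mode == "insert":
        return f"insert the character {key!r}"   # insert mode: keys are just text
    return keymaps[mode].get(key, "unbound")

for mode in ("normal", "visual", "insert"):
    print(f"{mode:>6}: u -> {press(mode, 'u')}")
```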
Now while modal editing today is widely beloved (the Vim plugin for VSCode has at least eight million downloads), I suspect it was "carried" by the popularity of vi, as opposed to driving vi's popularity.
Pre-vi editors weren't modal. Some, like EDT/KED, used chorded commands, while others like ed or TECO were basically REPLs for text-editing DSLs. Both of these ideas widely reappear in modern editors.
As far as I can tell, the first modal editor was Butler Lampson's Bravo in 1974. Bill Joy admits he used it for inspiration:
A lot of the ideas for the screen editing mode were stolen from a Bravo manual I surreptitiously looked at and copied. Dot is really the double-escape from Bravo, the redo command. Most of the stuff was stolen.
Bill Joy probably took the idea because he was working on dumb terminals that were slow to register keystrokes, which put pressure on minimizing the number of keystrokes needed for complex operations.
Why did Bravo have modal editing? Looking at the Alto handbook, I get the impression that Xerox was trying to figure out the best mouse and GUI workflows. Bravo was an experiment with modes, one hand on the mouse and one issuing commands on the keyboard. Other experiments included context menus (the Markup program) and toolbars (Draw).
Xerox very quickly decided against modes, as the successors Gypsy and BravoX were modeless. Commands originally assigned to English letters were moved to graphical menus, special keys, and chords.
It seems to me that modes started as an unsuccessful experiment to deal with a specific constraint and were later successfully adopted to deal with a different one. It was a specialized feature as opposed to a generally useful feature like chords.
While vi was popular with Bill Joy's coworkers, he doesn't attribute its success to its features:
I think the wonderful thing about vi is that it has such a good market share because we gave it away. Everybody has it now. So it actually had a chance to become part of what is perceived as basic UNIX. EMACS is a nice editor too, but because it costs hundreds of dollars, there will always be people who won't buy it.
Vi was distributed for free with the popular BSD Unix and was standardized in POSIX Issue 2, meaning all Unix OSes had to have vi. That arguably is what made it popular, and why so many people ended up learning a modal editor.
I think by the 90s, people started believing that modal editing was a Good Idea, if not an obvious one. That's why we see direct descendants of vi, most famously vim. It's also why extensible editors like Emacs and VSCode have vim-mode extensions, but these are always simple emulation layers on top of a unimodal baseline. This was good for getting people used to the vim keybindings (I learned on Kile) but it means people weren't really doing anything with modal editing. It was always "The Vim Gimmick".
Modes also didn't take off anywhere else. There's no modal word processor, spreadsheet editor, or email client.1 Visidata is an extremely cool modal data exploration tool but it's pretty niche. Firefox used to have vimperator (which was inspired by Vim) but that's defunct now. Modal software means modal editing which means vi.
This has been changing a little, though! Nowadays we do see new modal text editors, like kakoune and Helix, that don't just try to emulate vi but do entirely new things. These were made, though, in response to perceived shortcomings in vi's editing model. I think they are still classifiable as descendants. If vi never existed, would the developers of kak and helix have still made modal editors, or would they have explored different ideas?
Not too related to the overall picture, but a gripe of mine. Vi and vim have a set of hardcoded modes, and adding an entirely new mode is impossible. Like if a plugin (like vim's default netrw) adds a file explorer it should be able to add a filesystem mode, right? But it can't, so instead it waits for you to open the filesystem and then adds 60 new mappings to normal mode. There's no way to properly add a "filesystem" mode, a "diff" mode, a "git" mode, etc, so plugin developers have to mimic them.
I don't think people see this as a problem, though! Neovim, which aims to fix all of the baggage in vim's legacy, didn't consider creating modes an important feature. Kak and Helix, which reimagine modal editing from the ground up, don't support creating modes either.2 People aren't clamouring for new modes!
So far I've been trying to show that vi is, in Pablo's words, "irreplaceable". Editors weren't doing modal editing before Bravo, and even after vi became incredibly popular, unrelated editors did not adopt modal editing. At most, they got a vi emulation layer. Kak and helix complicate this story but I don't think they refute it; they appear much later and arguably count as descendants (so are related).
I think the best explanation is that in a vacuum modal editing sounds like a bad idea. The mode is global state that users always have to know, which makes it dangerous. To use new modes well you have to memorize all of the keybindings, which makes it difficult. Modal editing has a brutal skill floor before it becomes more efficient than a unimodal, chorded editor like VSCode.
That's why it originally appears in very specific circumstances, as early experiments in mouse UX and as a way of dealing with modem latencies. The fact we have vim today is a historical accident.
And I'm glad for it! You can pry Neovim from my cold dead hands, you monsters.
My talk, "Designing Low-Latency Systems with TLA+", is happening 10/23 at 11:40 central time. Tickets are free, the conf is online, and the talk's only 16 minutes, so come check it out!
I guess if you squint gmail kinda counts but it's basically an antifeature ↩
It looks like Helix supports creating minor modes, but these are only active for one keystroke, making them akin to a better, more ergonomic version of vim multikey mappings. ↩
Dear Zeussers and Zeusinnen,
A few weeks ago we were contacted by the HortiRoot research group. In their lab they have converted a fair number of Epson flatbed photo scanners so that small plants in petri dishes can be grown on top of them. This lets them take scans at short intervals, producing high-resolution timelapses of the plants' growth.
The UGent workshop WE62 helped them with the hardware mods.
For the software, however, the researchers found that UGent has no equivalent of that workshop.
The HortiRoot researchers tried automating the standard Windows 10 Epson software themselves using pyautogui.
They eventually got a working system, which is still in use today.
It is not maintainable at all, though: automatically driving the mouse doesn't exactly scale well…
They had also tried the Ubuntu software, but sadly those adventures were short-lived because DICT had no support staff with enough expertise in it.
Recently, the mandatory update to Windows 11 threw another spanner in the works.
After hearing some details of the project, DICT referred the research group to us. A little later, Xander, Hannes and I were warmly received at campus Coupure. In a lab full of greenery we got to see the various iterations of the scanner mods. But what impressed us most were the many hacks they had come up with just to get the software working. The scan software wouldn't work with multiple devices attached, so their workaround was to attach each scanner to its own virtual machine. Not ideal, so they then tried exposing only one device at a time to the software using programmable USB hubs. That stopped working in Windows 11, and even then the number of devices was quite limited. They had also discovered that they couldn't take scans during Tuesday nights, because DICT does Patch Tuesdays…
After we exchanged our internal phone number, two of the scanners were delivered to the Zeuskelder shortly afterwards.
A few people here have already been working with the scanners to help explore the options.
What immediately surprised us is that epsonscan2, the Linux software that Epson publishes, is almost entirely open-source and free.
It is also a rich source of documentation on the scanners' internals. The only hurdle is that part of it is in Japanese 😅.
Actually getting this software to work is another story.
Whenever the software tried to connect, the scanners kept going into an "internal error" state.
By adding some instrumentation to epsonscan2 and trying many different setups, we eventually found a solution.
We noticed that a scanner only has to be initialized correctly once by other software (such as the Windows 11 version), after which it works with any software.
By intercepting that communication, we were able to build a program that initializes the scanners the same way.
Exactly why this is necessary is not yet entirely clear.
We're relieved that this works, because it means we don't need Windows, virtual machines, USB multiplexers or automated menu clicking at all. Instead we can work entirely with the reliable, predictable software we're used to.
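For flavour, here's a minimal sketch of what replaying an intercepted init packet can look like with pyusb. The actual tool is written in D against libusb, and the product ID, endpoint address and payload below are placeholders, not the real values:

```python
# Minimal sketch of replaying a captured initialization packet over a
# USB bulk transfer using pyusb. Placeholder IDs and bytes throughout.

import usb.core

VENDOR_EPSON = 0x04b8            # Seiko Epson's USB vendor ID
PRODUCT_ID   = 0x0000            # placeholder: depends on the scanner model
INIT_PACKET  = b"..."            # placeholder: the intercepted init bytes

dev = usb.core.find(idVendor=VENDOR_EPSON, idProduct=PRODUCT_ID)
if dev is None:
    raise SystemExit("scanner not found (permissions? try as root)")

dev.set_configuration()
dev.write(0x02, INIT_PACKET)     # bulk OUT endpoint address: also a placeholder
```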
The plan is to group the scanners in sets of about four, each set driven by one node (a Raspberry Pi). These nodes are discovered on the local network by the backend of a web application. That webapp then sends out requests to take scans with given parameters, at regular intervals. The resulting images are moved from the node to the machine running the webapp. Users can then manage those timelapses from the webapp on the local network, and download the images too.
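A sketch of that control flow, with illustrative node addresses, endpoint and parameters (not the project's actual API):

```python
# Sketch of the planned setup: the webapp backend asks each node
# (a Raspberry Pi driving ~4 scanners) for a scan at a fixed interval.

import time
import requests

NODES = ["http://scannode-1.local:8080", "http://scannode-2.local:8080"]

def scan_round(params):
    for node in NODES:
        r = requests.post(f"{node}/scan", json=params, timeout=300)
        r.raise_for_status()     # images get moved to the webapp machine afterwards

while True:
    scan_round({"dpi": 1200})
    time.sleep(15 * 60)          # timelapse interval
```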
What has been implemented so far?
- A tool, written in D, that initializes the scanners by sending USB_BULK packets, including the firmware blob; nearly everything is done through the libusb C library.
- A quirk in how libusb detects the device: running as root fixes it…
- The protocol spoken over USB_BULK is ESC/I. Some links to documentation about it can be found here.
- A build of the epsonscan2 software from the git repo using docker, made to point at /opt/epsonscan2. That prefix is easier to work with than building .deb packages that unpack into /usr.
Definitely have a look at the git repo, and go ahead and join the ~horitroot-scanners channel on Mattermost!
PS: what could we scan?
These are HGTD-IJS group meeting pages
https://cern.zoom.us/j/96207798641?pwd=STczWTN5YjFYTmlITFB3bEdPTE1MUT09
For the second meeting of the Ljubljana RNA Salon in 2025/2026, we will have an invited lecture given by Dr. Marcus Jahnel from TU Dresden, Germany. This will be followed by a pre-Christmas gathering.
The RNA Salon will take place at the National Institute of Chemistry.
In this talk I will consider models connecting baryogenesis with a first-order phase transition. In particular, I will elaborate on scenarios where the required amount of CP violation originates from the production of heavy particles after scalar shells interpolating between the true and false vacuum collide. I will focus on the case when both heavy particles are produced on-shell, as well as the opposite regime when one of the heavy particles is produced off-shell, subsequently decaying to light Standard Model states. I will elaborate on the phenomenological implications of both scenarios.
Welcome to the registration form of the ERA KR21 Conference 2025 for members of the KR21 Regional Alliance.
This 1.5-day workshop, organized within the Adriatic Edition of BioExcel, offers a hands-on introduction to modern computational tools and workflows in biomolecular simulations. Participants (primarily from Italy, Croatia and Slovenia) will explore key software frameworks such as BioExcel Building Blocks (BioBB) for setting up reproducible molecular dynamics workflows, and gain practical experience with protein MD setup and automatic ligand parameterization. Two further topics of the workshop cover integrative modelling with HADDOCK, including a tutorial on antibody–antigen complex modelling, and alchemical free energy calculations, followed by a PMX-based practical on ligand modifications.
Through a combination of lectures and guided tutorials, the workshop provides both theoretical grounding and practical skills best suited for PhD students and young researchers in computational biology and molecular modelling.
Prerequisites:
Date: 11-12 February 2026
Location: Ljubljana, Slovenia
Venue: National Institute of Chemistry, Ljubljana, Slovenia
Room: Grand Lecture Hall, National Institute of Chemistry (Map)
Format: On-site
Fee: 70€ (to be paid after application is accepted; fee includes workshop participation, three coffee breaks and two lunches)
Topics: see Contribution list
Participation: 40 applicants will be selected for on-site participation. Applicants from Croatia, Italy and Slovenia will be prioritized for on-site participation. We can only accept participants from EuroHPC Joint Undertaking member institutions. In our selection, we will account for geographical and gender distribution. A letter of motivation needs to be handed in when registering (see form). Participants will need to bring a laptop to take part in the tutorial sessions and should refresh their Linux skills beforehand. We recommend reviewing this introduction to the UNIX shell.
Registration starts: 5 November 2025
Registration ends: 30 November 2025
Acceptance: 10 December 2025
Payment deadline: 10 February 2026
The International Masterclasses offer secondary-school students a unique opportunity to step into the world of quarks and leptons themselves. Participants will perform measurements on real data recorded at experiments at CERN and other research centres around the world, meet researchers, and exchange results and experiences with peers from other countries.
At the one-day event, which includes presentations and a hands-on workshop, students will use data from the ATLAS and Belle II detectors to learn about elementary particles and the forces between them, and will encounter the use of elementary particles in modern medicine, especially in hadron radiotherapy.
The event will run all day at the Jožef Stefan Institute in Ljubljana. In the morning, researchers from the Jožef Stefan Institute, the Faculty of Mathematics and Physics of the University of Ljubljana, and the Faculty of Chemistry and Chemical Technology of the University of Maribor will present particle physics, medical physics, and the detectors they use in their research work.
Participants will take a virtual walk through the interior of the Belle II detector, with plenty of opportunities along the way to talk with researchers about their work and about life at CERN and in Japan. They will also learn how research into the structure of the universe contributes to advances in medical technologies. The morning part will close with a short presentation by one of our industrial partners.
Lunch will be organized in the Jožef Stefan Institute canteen (a surcharge of roughly 7 EUR, paid by participants themselves).
In the afternoon part of the workshop, students will hunt for unknown, short-lived particles by analysing data from the ATLAS and Belle II detectors. They will watch a radiotherapy simulation and choose the irradiation plan that best suppresses tumour growth while protecting healthy tissue. Finally, they will link up with students at other research centres around the world and join a live videoconference conversation with the experiments' control room.
We invite you to join us for an educational and fun day of physics and experience what real research work is like!
The workshop will take place in person at the Jožef Stefan Institute.
To register, use the tab on the left or the QR code below.
This weekend, Italian workers once again said no to armament and war. On Friday, several "di base" union federations (grassroots unions organized from below) held a general strike, stopping work for 24 hours across several sectors, from public transport, education and healthcare to individual companies.
The workers are striking because they oppose the increase in defence spending that the government has planned in the draft budget for next year. These are entirely misguided priorities: at a time of rising living costs, further aggravated by the climate crisis, the ruling classes want to devote even more money to weapons. Meanwhile, the public services that provide real security, namely education, healthcare and social services, have been financially starved for decades.
For the first time since 2010, when the government was led by Silvio Berlusconi, Italian journalists are also striking, demanding the new collective agreement they have been waiting for since 2016. They demand that it cover new professional profiles, and they demand decent pay and working conditions for newly hired journalists as well as freelancers. The number of employed journalists has fallen from 19,000 in 2011 to 13,000, as more and more journalists have in recent years been hired as precarious workers to cut costs further.
In the larger cities, the strike was accompanied by demonstrations. In Genoa, the collective of militant dockworkers was joined by the climate and pro-Palestinian activist Greta Thunberg, the UN Special Rapporteur on the situation of human rights in the occupied Palestinian territory Francesca Albanese, and the former Greek finance minister Yanis Varoufakis.
In nearby Venice, more than 3,000 striking workers and local activists gathered in front of the arms company Leonardo and blocked access to it for several hours. They shut down one of the main arteries into the city to voice their clear opposition to weapons being produced in their local environment, weapons with which Israel is carrying out a genocide against Palestinians.
On Saturday, a joint anti-government and anti-war demonstration took place in Rome, at which Italian workers once again showed that they do not intend to watch in silence as their money is funnelled into armament while they themselves find it ever harder to make ends meet.
Let us follow their example and organize!
Photo: @ramiasolefoto
The post NO TO ARMAMENT AND WAR! first appeared on Rdeča Pesa.
“If I must die,
you must live
to tell my story…”
These verses were written by the poet from Gaza, Refaat Alareer, in December 2023, one week before he was killed by the Israeli army. This summer, the Palestinian girl Sara, who was to have performed in Ljubljana, recorded a recitation of the poem together with her peers. The day after the recitation was published, she was killed by the Israeli army. Hoping that Sara, Refaat and the other Palestinians did not die in vain, members of Rdeča Pesa, together with the Vagant society and Fawzi and Darinka Abder Rahim, organized an evening of Palestinian poetry in Maribor in mid-November.
The central guest of the evening was the Palestinian poet Fawzi Abder Rahim, born in the village of Beit Jeez in 1945. During the Nakba he was expelled together with his family and grew up as a refugee, first in tents, then in mud houses in the West Bank. After the Six-Day War he came, through the Non-Aligned Movement, to study in Yugoslavia, in Maribor and Ljubljana. Here he started a family, made a home and learned the language, but the longing for his homeland remained. Together with his wife Darinka he translated his poems into Slovene and in 2021 published the bilingual collection Do kdaj (Until When), in which he asks how long Palestinians will go on living in the hell of occupation.
We listened to his poems Genocide, The Pain of the Road, The Myth of Endurance and Tunnel to Freedom, in which he describes children under the rubble, demolished homes and uprooted olive trees, but also an unshakable will to live and to be free. The poems were read by Zlatka Rashid, Maja Pan, Darinka Abder Rahim and Ana Lah, who also hosted the event. The poetry evening was musically enriched by Veronika Kašman and Lan Portir.
We walked through the origins of Zionism, when Palestine was, in the eyes of the European great powers, "a land without a people", even though it was densely populated and part of the Ottoman Empire. The Balfour Declaration, with which Great Britain "promised to one people the land of a third people", opened the way to genocide and forced displacement.
When the Zionists founded Israel through terror in 1948, which for Palestinians was the Nakba, the catastrophe of the Palestinian people, the Palestinians were not prepared for it. Hangings, the bombing of villages, collective punishment, the demolition of homes, forced flight into camps and exile: all this became the historical horizon, the foundations on which modern Palestinian poetry arose.
The poet Fadwa Toukan, born just before the adoption of the Balfour Declaration, grew up in a traditional, patriarchal society. Although from a wealthy family, as a woman she was confined within the four walls of the home, so after primary school she educated herself, with the support of her brother Ibrahim, the author of the old Palestinian anthem. Over time her poetry grew increasingly rebellious; after the Nakba she devoted herself to the suffering of refugees and the position of women in patriarchal society. Her poem Planet Earth was read in Arabic and Slovene by Fawzi Abder Rahim and Jerneja Breznik; in it the poet imagines a world in which the Earth would be hers, so that she could tear out the roots of hatred and dry up the rivers of blood.
Muin Bseiso, born in Gaza in 1927, was a poet and journalist who combined his creativity with organized resistance. For taking part in demonstrations and in the struggle for rights, he was imprisoned several times. The poem Challenge, read by Gregor Kašman, tells how he fears neither chains nor the gallows: "Blow as much as you want, you will not extinguish our torches."
Tawfiq Zayyad was a poet, communist politician and long-time mayor of Nazareth. Imprisoned several times for his political activity, in the poem Words (read by Eva Ribič) he renounces everything except poetry, the land, the sky and ordinary people. Tawfiq bows to workers and to their hands that build the world.
Samih al-Qasim, born into a Druze family, spent several years under house arrest and in prison for refusing to serve in the Israeli army and for his political activity. He wrote 87 books; the common thread of his work is revolution. In the poem Letter from Detention, read by Urška Breznik, he writes from a cramped cell, without paper or pencil. Despite the extreme discomfort, he looks for a way to get his message out into the world.
The best-known Palestinian poet, Mahmoud Darwish, who was seven years old at the time of the Nakba, had to leave his native village with his family because of the Zionists. We listened to his poem Someone, read by Isidora Popović. "Nero died, but Rome still stands. It fights on the battle of life. The grain of the wheat ear dries out, but it will sow new wheat fields," run the poem's closing verses.
Palestinian poems, whenever they were written, remain relevant today. The only difference is that today we can watch the horrors live, on our phones. The Zionists' lies are public; they have published them themselves.
If the world's elite does not care about Palestinian lives, because its appetites for power, control and wealth are too great, we, the people, do care. Palestinians continue their unwavering struggle, and we struggle together with them. All the way to a free Palestine. As Fawzi wrote in the poem Tunnel to Freedom: "The roots of thousand-year-old olive trees still live; they are stronger than weapons. /../ The light of salvation shines into the tunnel, despite the burning fire."
Photo: Sami Rahim
The post WITH VERSES ALONG THE PATHS OF THE OCCUPATION OF PALESTINE first appeared on Rdeča Pesa.
In the recent referendum on the voluntary ending of life, we can see an attempt to finally fulfil the programme of the French Revolution. In the times before the bourgeois revolution, the churches in Europe held a monopoly over the course of life. Births, marriages, death: the church governed the main stations of life with its sacraments. The bourgeois state gradually took over the register of births and marriages; by nationalizing control over the population, it freed people from church domination. With free decision-making about bearing children, it ensured free management of life, from birth to death. But not of death itself! The referendum of 23 November could have freed us on this point too, the last stronghold of church power. But it did not!
Why did citizens not want to shake off this last medieval yoke? The reasons for voting "against" varied, the experts instruct us. The ideology of the "sanctity of life" and obedience to the papal church are obvious reasons. But more important for us is the reason given by those who thought the law was bad. The law did indeed greatly complicate the voluntary ending of life. The more legal procedures are complicated, the greater the likelihood of creating opportunities to circumvent the law and of multiplying "legal loopholes".
In general, the bourgeois rule-of-law state can "liberate" individuals only by means of the "rule of law", that is, legal fetishism. But that is not the emancipation of the human being. Karl Marx published a critique of the French Revolution as early as 1844 in On the Jewish Question. The French Revolution, he wrote, liberates the individual only politically, and thereby shatters society into atomized individuals connected only by law.
Put simply: the bourgeois revolution introduces the war of all against all within the limits of the laws of the bourgeois rule-of-law state. Among other things, this is also the basis for the "free" employment contract between the proletarian and the capitalist, and this contract is the basis of capitalist exploitation. Bourgeois legal fetishism does not guarantee free coexistence in a society of solidarity.
"Against" was probably also the vote of many who wanted thereby to protest against the policies of the current government. But one could vote against the government for various reasons: for example, because it did not establish a solid public healthcare system, or because it did not privatize healthcare once and for all. That is, for completely opposite reasons.
From the fact that different, even opposing reasons lead to the same decision in a referendum, we can discern the general limitation of the bourgeois political system. The law was drafted by a party, supported by the coalition parties, and passed by a party-based parliament. While the law was being drafted, the question of the voluntary ending of life was publicly discussed somewhat, though not very keenly. But it was the parties that decided how much of the thinking from that discussion was taken into account in writing the law.
Before the referendum, the debate about the law was admittedly lively, but it could no longer influence the law. The ideas presented in the pre-referendum debate could not be used creatively. They were trapped in the meagre choice for or against the law as the parties had defined it. Party democracy cannot harness all the intellectual powers of society. In elections, too, we can only vote for or against party lists. In the end we choose the least bad option. Bourgeois democracy is not democratic.
So we face the old question again: must the bourgeois revolution first be carried through to the end (human rights, the rule of law, bourgeois parliamentarism), or is the socialist revolution on the agenda, so that we must fight for the abolition of classes and exploitation, for the socialization of the means of production and of decision-making, for coexistence in solidarity?
The promises of the bourgeois revolution cannot be realized under capitalism. Without unpaid work in the household, capitalism would collapse. In every period it has needed unfree labour, from "traditional" relations in the colonies to slavery in the American South and migrant labour today.
Patching up capitalism would therefore only mean treading water in a historical dead end. The socialist revolution is on the agenda. Especially for us, who practised it not so long ago.
This guest column was written by Rastko Močnik.
GUEST CONTRIBUTION // RP is an open platform and publishes contributions by authors that touch on progressive struggles and questions
Photo: Žiga Živulovič jr./Bobo
The post LET US LEAVE THE BOURGEOIS REVOLUTION IN THE PAST; THE SOCIALIST ONE IS ON THE AGENDA! first appeared on Rdeča Pesa.
At the end of October, the director of the government media office in Gaza, Dr. Ismail al-Thawabta, announced that the Israel Defense Forces (IDF) had allegedly been removing organs from the bodies of Palestinians, and called for an immediate international investigation "that would hold Israel accountable for grave violations against the bodies of the martyrs and the theft of their organs". Many of the 120 bodies that Israel returned to the Palestinian authorities via the Red Cross at the start of the ceasefire were mutilated. Many were missing body parts such as inner-ear cochleae, corneas and livers. The bodies also showed signs of strangulation, fractures, burns and deep wounds, The Guardian reported.
Although such accusations may seem sensational or implausible at first glance, and the Israeli authorities deny them, they are not new. Palestinian authorities and numerous human rights organizations have for decades cited cases in which Israeli doctors stole organs from Palestinians, whether for profit, for transplants or for research.
In 2009, an Israeli pathologist, the head of the Israeli forensic institute, admitted in a well-known interview that in the 1990s Israel had harvested skin, corneas, bones and heart valves from Palestinians and foreign workers, often without the consent of their relatives. He described how not only corneas but entire eyes were removed from the dead, the eyelids then glued shut, and the bodies returned to relatives in that state. As some of his colleagues from the forensic institute confirmed, in practice it was easiest to steal the organs of occupied Palestinians. At the parliamentary hearings that followed the revelation, those questioned said that the practice of organ theft had never ended.
In 2014, members of the Israeli authorities admitted on television that they had harvested the skin of dead Palestinians and African workers and used it to treat the burns of Israeli soldiers. The director of the Israeli skin bank revealed that they had more than 17 square metres of human skin in stock.
The theft of Palestinians' organs goes back to the first intifada at the end of the 1980s. Meira Weiss, formerly employed at the Israeli forensic institute, wrote in her 2014 book that the IDF allowed the institute to harvest Palestinians' organs on the basis of a military order requiring an autopsy of every killed Palestinian. The autopsies also served the theft of organs. As she wrote in the book Over Their Dead Bodies, the workers of the forensic institute remember the first intifada as the "golden days", when they could collect organs freely.
In recent decades there have been several reports and accusations of trade in human organs involving Israel. As an associate of the forensic institute admitted, organs were also sold, to anyone, as long as they paid well.
A 2015 European Parliament report lists Israel as one of the central countries involved in the organ trade, Israel being a major consumer of organs. In 2003, South African police uncovered an international human-organ trafficking network from which Israelis received the most illegally transplanted organs. In 2019, a Kazakhstani doctor was arrested for selling the organs of poor patients to wealthy Israelis.
Israel exploits Palestinians in the most brutal form. As the Cameroonian philosopher Achille Mbembe said of Israelis' dehumanizing practices against Palestinians: "The most accomplished form of necropower is the contemporary colonial occupation of Palestine." The Israeli state's systematic violence against Palestinians, and its control over them, give it extreme power over Palestinians' lives, and over their bodies after death.
Photo: REUTERS/Mahmoud Issa
The post THE ISRAELI ARMY IS STEALING THE ORGANS OF DEAD PALESTINIANS first appeared on Rdeča Pesa.
Today we are sharing a Radio Študent programme on the army and related institutions encroaching on the secondary-school sphere. They spoke with teachers at several secondary schools and with representatives of the Ministry of Defence. As an example of this cooperation, they highlighted the recent arms fair in Celje, where they also spoke with some of the students.
You can find the full programme at: https://radiostudent.si/univerza/unikompleks/dijasko-vojaski-kompleks
You are invited to listen.
The post THE STUDENT-MILITARY COMPLEX first appeared on Rdeča Pesa.
2,959 apartments. That is how many homes the two richest residents of Slovenia, Vesna and Dari Južna, can buy. If we add to their 511 million euros the wealth of the other ninety-nine richest Slovenians, together they can afford 67,220 apartments, using conservative property valuations. One hundred millionaires can buy all the apartments in Maribor, Ptuj and Murska Sobota combined. While hundreds of thousands of working people pay usurious rents and hundreds spend their days on the streets, the owners of capital grow richer and richer.
According to the list of the hundred richest Slovenians, compiled every year by Finance, their wealth grew by 12% over the past year. In just 12 months they filled their coffers with an additional 1,200 million euros. On average, they gained 3.29 million euros on every day of last year. Vesna and Dari Južna increased their wealth, on average, by 315.4 euros every minute of last year. For comparison, the minimum wage in Slovenia rose in 2025 by a mere 23.82 euros gross.
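As a quick sanity check of these figures, here is a short back-of-the-envelope calculation in Python; the inputs are the article's own numbers, while the per-apartment price and the Južnas' implied yearly gain are derived, not stated:

```python
# Back-of-the-envelope check of the figures above (all amounts in EUR).
juzna_wealth = 511_000_000   # stated wealth of Vesna and Dari Južna
juzna_flats = 2_959          # apartments they could reportedly buy
print(f"implied apartment price: {juzna_wealth / juzna_flats:,.0f} EUR")  # ~172,700

top100_yearly_gain = 1_200_000_000             # +12% over 12 months
print(f"gain per day: {top100_yearly_gain / 365:,.0f} EUR")  # ~3.29 million

juzna_per_minute = 315.4                       # stated average gain per minute
minutes_per_year = 60 * 24 * 365
print(f"implied Južna yearly gain: {juzna_per_minute * minutes_per_year:,.0f} EUR")  # ~165.8 million
```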
The enrichment of the wealthiest goes hand in hand with growing economic and broader social inequalities. Between 2000 and 2023, Slovenia recorded one of the fastest increases in economic inequality among the member states of the European Union. People at the top of the class structure live ever more luxurious lives, while the state cuts and impoverishes public services, and ordinary working people struggle ever harder with the rising prices of basic necessities.
While a narrow social and economic elite builds heaven on Earth for itself, the lives of everyone else increasingly resemble a subsistence hell. Such social conditions, in which the exploiters ever more extensively destroy the planet we live on and the lives of us all, can be changed. Let us recall the famous words of the American writer Ursula K. Le Guin on the transience of historical social relations: "We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings."
#rdečapesa
The post "IT IS EASIER FOR A CAMEL TO GO THROUGH THE EYE OF A NEEDLE THAN FOR A RICH MAN TO ENTER THE KINGDOM OF GOD" first appeared on Rdeča Pesa.
The UN Security Council adopted the resolution on Gaza, which leads not to peace but to permanent occupation, with 13 votes in favour and 2 abstentions.
"Not a single member of the Council had enough courage, principle or respect for international law to vote against this colonial disgrace of the USA and Israel," the former director of the New York office of the UN High Commissioner for Human Rights, Craig Mokhiber, wrote on X. The proposal was rejected by Palestinian civil society and factions, and by defenders of human rights and international law all over the world. Mokhiber called 17 November a day of shame for the United Nations and for governments around the world that bow to the American empire and its violent Israeli allies.
The resolution protects the state of Israel, which colonizes and carries out apartheid and genocide. Slovenia also voted for it, which reveals a wide gap between what the political elite says and what it does. Slovenia has thereby placed itself on the side of those co-responsible for Israeli and American crimes against humanity.
The Board of Peace envisaged by the resolution will be chaired by none other than the blood-stained president Donald Trump. It is, after all, the USA that bears the greatest responsibility for the genocide in Gaza, having sent Israel tonnes of weapons throughout and supported this genocidal state. It would make practically no difference if Trump were replaced by Netanyahu himself. The resolution also envisages disarmament, not of Israel, which is responsible for ethnic cleansing, genocide and apartheid, but of Hamas, which arose as a direct consequence of the occupation.
Under the new resolution, Palestinians have no right whatsoever to decide; all control over Gaza remains with Israel. Everything suggests that Donald Trump will have the leading say in the economic development of Palestine, while Palestine, judging by yet another in a series of unjust resolutions, will remain permanently occupied. But as Craig Mokhiber said: "The struggle for Palestinian freedom will continue unabated, with them or without them."
The post YET ANOTHER SHAMEFUL RESOLUTION ON PALESTINE; SLOVENIA CO-RESPONSIBLE first appeared on Rdeča Pesa.
Who kills most often in Slovenia? Researchers at the Institute of Criminology at the Faculty of Law in Ljubljana investigated this question on the basis of data from 1991 to 2016.
As a rule, murderers are men of middle or younger age who are intoxicated in almost half of the cases. They very often know their victims, and they are most dangerous to family members and partners. Almost half of murder victims are partners, former partners, rivals in an intimate relationship or family members. Almost all female victims are wives or partners of the murderers, and women make up 29% of all victims. In half of the cases, murders are merely the culmination of long-term violence against partners.
Only 9% of murderers are women; when they kill, they usually kill (violent) partners. 86% of murderers are Slovenian citizens, mostly with vocational or even lower education. Half of all perpetrators are unemployed. Most murders happen on Fridays or Saturdays, when people are at home and drink alcohol more often. The number of murders fell after 2006, rose again during the economic crisis, and fell again after the economy recovered.
If we also look at the data on suicides, men predominate here as well, committing more than three quarters of suicides. The likelihood of suicide rises with age, climbing very quickly among men after retirement. The suicide rate fell from 2004, then stagnated; a smaller decline is also visible after the end of the economic crisis, and only in the years of the epidemic did it rise slightly again. Besides the economic recovery, the large increase in psychological counselling after 2015 has probably contributed to the fall in the number of suicides. But this is still not at a point where everyone who needs quality psychological support would receive it in time and to a sufficient extent. Public psychological help is hard to access, and private help is too expensive for many in distress.
The data on the number of suicides, and especially of murders, correlate with unemployment and a poor economic situation. In the early 2000s, when the turbulent period of the restoration of capitalism came to an end, the number of homicides, murders and suicides began to fall. During the economic crisis, the number of murders rose again, though not to the previous level. After 2015, all these phenomena have been declining again. As these data show, the material deprivation and misery of unemployment that strike the working class in times of crisis are among the causes of increased violence in society, above all violence among those closest to one another.
The high rate of intimate-partner violence is another area in which institutionalized and deeply socially rooted patriarchal relations between the sexes show their ugly and bloody face. The intensification of capitalism also sharpens the differences between the sexes and affects relations within the family. Women are not competitive with men on the capitalist market for the simple reason that they either give birth or are expected to care for their children and, later, for ageing relatives. They therefore work in lower-paid jobs, part-time jobs and the like. On top of this, we have a system based on breadwinnership and dependence on a partner, which creates unequal conditions and greatly hinders the economic independence of victims, a precondition for survival. And that, as we can see, can be mortally dangerous.
Capitalism produces misery, which is reflected in the breakdown of even the closest interpersonal ties, too often torn apart in the bloodiest and most tragic way. We can expect that in the light of the neoliberalization proceeding at full speed under "the most left-wing government in the history of Slovenia", of the environmental crisis and of the ever greater individualization of society, the number of murders, homicides and suicides will begin to rise in the coming years. This trend can only be reversed by an organized workers' movement that replaces individualism with community (and not merely with such platitudes), giving meaning to the lives of comrades in the struggle for a better world.
The post WHO KILLS MOST OFTEN IN SLOVENIA? first appeared on Rdeča Pesa.
We are sharing a piece by Farid Tamalah on the violence of Israeli settlers. Farid lives and farms in the West Bank and has sent us contributions on Palestine in the past. The piece below was published in Middle East Monitor, and we have translated it into Slovene for you:
"On the hills of the occupied West Bank, a strange and painful irony plays out every day: the same Israeli settlers who seize Palestinian land, burn our olive trees and shoot at our farmers now imitate the very way of life they are destroying.
As a Palestinian farmer, I know when the Day of the Cross (Youm Al-Salib) is drawing to a close. After this feast, the first drops of rain begin to fall and the colour of the olives begins to change. The air turns humid, heralding a new harvest of oil. I take my tools, gather my family and head for the fields. These are ancient rites passed down to me by my mother, who knew all the signs of the land by heart: when to prune the olives, when to harvest, when to rest.
The land smells of thyme and damp mud; the birds sing as if blessing the season. For a moment, peace seems to reign, until my gaze falls on the hilltop, where I see settlers camping on the ridge, rifles slung over their shoulders, pretending to be farmers while denying us the right to farm. It is like killing the victim and then marching in her funeral procession.
Occupation and cultural appropriation
The settlers occupy the mountaintops overlooking our villages, where shepherds once grazed their flocks and farmers worked the terraces carved out by their ancestors. They have desecrated the indigenous landscape of our homeland. They hate us, the people of this land; they despise our language, music and culture, yet they imitate our rural traditions as if they were their own.
In recent years, illegal settlements have mushroomed across the West Bank. From these hilltops, settlers harass shepherds, steal olives and drive families off their land. According to B'Tselem and ARIJ, settler violence has reached record levels, with thousands of attacks on Palestinian farmers, homes and orchards every year. The UN's OCHA has recorded an increase of more than 45 per cent in attacks since last year. Dozens of families have been forced to leave their land. The goal is clear: to erase the indigenous population, to steal not only the land but also a way of life, a folklore and a cuisine.
On those same hills, settlers hold weddings under the olive trees in the Palestinian manner, pick olives by hand, cook shakshuka (tomatoes fried in olive oil with eggs) over a wood fire, brew tea in blackened tin pots and play the shibabeh, the flute that echoes through Palestinian villages from Jenin to Hebron. They dress in rough cotton shirts, tend small gardens (hakura) and pretend to have inherited a bond with the land they stole.
They call it a 'return to nature', but it is only a performance, a desperate attempt to create belonging where there is none. Their imitation is not admiration; it is appropriation born of a complex of illegitimacy. Deep down they know they are strangers here. They feel the emptiness of rootlessness and try to fill it with borrowed symbols and stolen traditions: they destroy olive trees yet long for their shade; they expel farmers yet envy their simplicity; they occupy the land yet imitate the lives of those they dispossessed. Their desire to appear native at once reveals their estrangement.
The land is our identity
For us Palestinians, the land is not a lifestyle or a weekend trip; it is history, memory and identity. Every olive grove carries the stories of generations. Every plot of land bears an Arabic name tied to the memory of people who have lived here for millennia. Every spring has a name, every terrace a story. Every stone was lifted by hands that loved this land and knew its secrets.
When I see settlers swimming in our springs, setting up picnic tables by our wells or holding weddings to Palestinian folk music, I feel more than anger: I feel sorrow mixed with doubt. The settlers destroy the roots and then pretend to be rooted. They kill farmers and then sing farmers' songs. They can imitate the gestures of belonging, but they cannot inherit its soul. They can cook shakshuka, but they will never taste it as we do, seasoned with labour, patience and longing. They can sing our songs, but their voices will never carry the love and pain that shaped them. Our very being is made from the soil of this land.
The land remembers
Their imitation reveals a deep truth: the Palestinian way of life is the authentic expression of this land. The settlers want to act like natives, to blend into the surroundings and erase the visible signs of occupation. But no matter how much they borrow, their presence remains a violent intrusion. They cannot erase the truth with olive oil or cover up injustice with a folk song. As long as settlers go on killing farmers, stealing the olive harvest and driving families from their homes, their attempts to put down roots will remain hollow. They can occupy the hilltops, but they cannot hide the truth.
The settlers can imitate our life, but they cannot imitate our love for this land; love cannot be counterfeited, and roots cannot be transplanted by force. They can borrow our songs, food and customs, but they cannot inherit the centuries of care, sweat and devotion that shaped this land and its people.
This land will always know its children: those whose skin carries its dust, whose language was born on its hills, whose songs rise with its wind. Our skin is the colour of its soil; our hearts beat to its rhythm. No imitation, violence or occupation can change this truth. The olive trees will survive, and so will we."
The original piece is available at: https://bit.ly/3LEIfXC
Photo: B'Tselem
The post Settlers who kill Palestinian farmers and imitate their way of life first appeared on Rdeča Pesa.
The Roma are the minority in Slovenia about which the least is known. Views of them are shaped largely by rumours heard from (more or less racist) relatives or friends from Dolenjska, and by media reporting. Even among comrades who otherwise hold perceptive and critical positions, this leads to spontaneous views that in many respects do not differ from those of the most reactionary elements of society. To move beyond these spontaneous views and understand the present position of the Roma, we must understand their history in our lands.
The Roma began migrating from India at the beginning of the second millennium and are first mentioned on the territory of present-day Slovenia in the 14th century. In the following centuries they mostly travelled through this area and settled only occasionally. Today's Roma came to Slovenian territory from different directions: Roma arriving via Hungary and Central Europe settled Prekmurje, while Roma arriving via the Balkans settled southeastern Slovenia. Both groups arrived in larger numbers in the 18th century. Since the time of Maria Theresa, the authorities repeatedly tried to settle the Roma permanently. This can be understood in the light of developing capitalism and the laws against "vagrants" adopted in many European countries. Especially after the land emancipation, these laws targeted peasants who had been left without land and therefore roamed the towns and forests instead of breaking their backs in the emerging factories and workshops. For similar reasons, the authorities tried to settle the Roma permanently and turn them into (cheap) labour, or at least to swell the ranks of the unemployed, which would lower the price of labour power. Because of traditional Roma (semi-)nomadism and of persecution, this process was usually unsuccessful. Some Roma also live in cities, but they are a minority; part of them immigrated during the wars that followed the break-up of Yugoslavia.
The Roma of Prekmurje settled there from Hungary from the 17th, and especially the 18th, century onward. They engaged mainly in various crafts, such as blacksmithing, making augers, sharpening blades, repairing umbrellas and the like. Occasionally inhabited Roma settlements are mentioned in this area as early as the 18th century. The first permanent Roma settlements beyond the Mura date to the period before the First World War. Usually, blacksmiths and their families settled first on a purchased scrap of land, and their relatives soon settled around them, often on the land of local farmers. These settlements were frequently quite crowded, with little arable land attached. But the position of the surrounding peasants was not very different from that of the Roma. The Kingdom of Hungary, to which Prekmurje belonged, was characterized by peasants dividing their land among all their descendants, so they too were crowded onto holdings amounting (to caricature a little) to just 1/256 of a full hide (farm). Because of their modest holdings, many peasants, like the Roma, left for seasonal work on large manorial estates abroad. In these circumstances we can probably find the reasons for the greater closeness between the two groups, and for the greater success of the integration policy in the time of socialist Yugoslavia. Under socialism, a large share of the Roma were employed (around 50% across Yugoslavia as a whole). In Prekmurje, many worked in the Mura textile factory.
The situation in southeastern Slovenia was different. The Roma there remained (semi-)nomadic even after the Second World War. They set up temporary settlements on the commons (gmajnas, communal land) and moved from place to place. They engaged mainly in horse breeding, often moving across the countryside to graze their horses, and they also gathered herbs for the pharmaceutical (Krka) and food industries. They largely settled permanently on these commons in the 1960s and 1970s. After the break-up of Yugoslavia, the commons were broken up and privatized: land previously in common ownership, on which Roma were living, was suddenly fragmented among multiple owners. On the one hand, this caused discontent among the new owners; on the other, Roma settlements now stood on several small holdings, which to this day greatly complicates providing these settlements with utilities.
With permanent settlement, the clan-based social structure also disintegrated. The local chieftains, who in places still played a kind of mediating role between the Roma and the surrounding community even after the Second World War, disappeared. The predominant role of the clan community began to be replaced by the village community. Some relatives moved away for one reason or another, while Roma with no kinship ties moved in. After the break-up of Yugoslavia, the consequent collapse of the textile industry and Slovenia's integration into the global market, the Roma were among the first to lose their jobs. Because of racial discrimination, they found it harder to get jobs again after the economic recovery. In Prekmurje, this problem was partly alleviated by daily commuting to Austria, where some Roma go to work. In southeastern Slovenia, lacking an alternative, some turned to organized crime, which is easier there because of the proximity of the border with Croatia (for example, smuggling drugs and refugees), while others gave up looking for work and live on social benefits. Few of them (a little over 10%, though around 44% in Prekmurje) finish primary school, and even fewer continue their education. The police are bribed, or fear reprisals, so they do not act against organized crime. Misery in the Roma settlements is growing, Roma communities are disintegrating further, patriarchal and reactionary cultural patterns are deepening, crime and conflicts are increasing, the surrounding inhabitants are ever more hostile to the Roma, and the local authorities and the state are not solving the problem.
It is because of these historical circumstances that conditions among the Roma in Prekmurje and in southeastern Slovenia differ so much. While the situation in Prekmurje is better, in southeastern Slovenia no progressive solutions are on the horizon, still less a political force that would realize them. From the right, one hears more or less veiled calls for violence against and repression of the Roma, while liberals romanticize the Roma way of life or feebly call for integration. We must think beyond these narrow frames. We can justifiably assume that the integration of the Roma, which today really means only integration into the labour market (something various states have been attempting for at least 250 years!), will not succeed this way. Even if the Roma were educated en masse and then pushed onto the labour market, they would in general get the worst jobs (if any), as is typical of other minorities as well. Escaping the capitalist treadmill would then once again seem tempting.
Different circumstances also demand a different state policy for solving the Roma question, which has returned to the agenda in recent weeks because of the murder in Novo mesto. "The most left-wing government in the history of Slovenia" has exploited the situation to introduce an extremely repressive law (which will not affect only the Roma), one that legitimizes far-right positions and repression as a solution to social problems. On the other side, the right-wing parties, with their base political opportunism, are exploiting the situation to score political points before the elections, protesting alongside the 8 March Institute and neo-Nazis mingled among the worried residents of Dolenjska. They offer no solutions to the inhabitants of southeastern Slovenia or to the Roma. This points to a broader political crisis of the existing regime and to the hollowness of the political parties, which, instead of solving ever more pressing social problems, offer only culture war. It is up to us to build a workers' movement that will break the existing system and build a greener and more inclusive world.
The post ON THE "ROMA QUESTION" first appeared on Rdeča Pesa.
The Pol-pismeni (Semi-Literate) project was created to illustrate, in short sketches and stories, the various troubles that poor media literacy causes in society. Through humorous stories from the lives of a range of individuals (old and young, techno-optimist and techno-sceptic, urban and rural, those who navigate society well and those who do not yet), it treats a serious topic in a light-hearted way.
We would also like to present the project's theme to the wider public, so as part of the project we will be presenting the dangers of media (il)literacy in several places across Slovenia.
You are welcome to join us!
The post Pol-pismeni on tour – Domžale first appeared on Računalniški muzej.
The Pol-pismeni (Semi-Literate) project was created to illustrate, in short sketches and stories, the various troubles that poor media literacy causes in society. Through humorous stories from the lives of a range of individuals (old and young, techno-optimist and techno-sceptic, urban and rural, those who navigate society well and those who do not yet), it treats a serious topic in a light-hearted way.
We would also like to present the project's theme to the wider public, so as part of the project we will be presenting the dangers of media (il)literacy in several places across Slovenia.
You are welcome to join us!
The post Pol-pismeni on tour – Izola first appeared on Računalniški muzej.
After last time's MOSS, where we reviewed the past two or three years since Kiberpipa's meetups came back to life, it is time to look to the future.
This time we will
So, if you are interested in what will be happening next year, or, even better, if you would like to have a say in it, this is the perfect day to come to the meetup.
The post c| meetup № 29: Plans for 2026, growing the community (and HackerTrain to FOSDEM) first appeared on Računalniški muzej.
The Pol-pismeni (Semi-Literate) project was created to illustrate, in short sketches and stories, the various troubles that poor media literacy causes in society. Through humorous stories from the lives of a range of individuals (old and young, techno-optimist and techno-sceptic, urban and rural, those who navigate society well and those who do not yet), it treats a serious topic in a light-hearted way.
We will present 20 new stories and get together with everyone who lent their voices to make the new season even more colourful and diverse. You are cordially invited!
The post Pol-pismeni – twenty new stories first appeared on Računalniški muzej.
The Pol-pismeni (Semi-Literate) project was created to illustrate, in short sketches and stories, the various troubles that poor media literacy causes in society. Through humorous stories from the lives of a range of individuals (old and young, techno-optimist and techno-sceptic, urban and rural, those who navigate society well and those who do not yet), it treats a serious topic in a light-hearted way.
We would also like to present the project's theme to the wider public, so as part of the project we will be presenting the dangers of media (il)literacy in several places across Slovenia.
You are welcome to join us!
The post Pol-pismeni on tour – Krško first appeared on Računalniški muzej.
The Pol-pismeni (Semi-Literate) project was created to illustrate, in short sketches and stories, the various troubles that poor media literacy causes in society. Through humorous stories from the lives of a range of individuals (old and young, techno-optimist and techno-sceptic, urban and rural, those who navigate society well and those who do not yet), it treats a serious topic in a light-hearted way.
We would also like to present the project's theme to the wider public, so as part of the project we will be presenting the dangers of media (il)literacy in several places across Slovenia.
You are welcome to join us!
The post Pol-pismeni on tour – Tržič first appeared on Računalniški muzej.
The autumn edition of the Ruby meetup is coming to the Računalniški muzej.
They will continue with the "alternative format" of meetups:
– project demos (show and tell),
– open discussion,
which means the whole event is in the hands of the attendees.
If you have a project you would like to present, bring it along and tell us more about it. If a technical dilemma came up while you were programming, you can resolve it with ideas from the other attendees. And if you have great new ideas that may be worth debating because they push the boundaries of established practice, this will certainly be a good place to show them to others.
The post Autumn Ruby meetup first appeared on Računalniški muzej.
Ahead of the JCON GenAI Ljubljana conference, OpenBlend – Slovenian Java User Group invites you to a relaxed Java & GenAI meetup with a Java legend: Adam Bien!
The event takes place the evening before the conference (check the agenda, as a few places are still available:
https://genai.jcon.one/agenda).
Admission is free, but places are limited.
###
What awaits you?
Adam Bien will share his experience, reflections and "battle stories" about Java development in the age of artificial intelligence (GenAI): how Java's role is changing in the modern world, how it can be combined with AI tools, and what the future holds.
Don't expect a PowerPoint marathon, just a relaxed conversation, plenty of practical insights and the chance to ask Adam questions live.
###
After the talk
After the official part, there will be informal socializing with Adam and all the speakers of the JCON GenAI Ljubljana conference, which takes place the following day.
Over snacks and drinks, there will be a chance to chat and network with developers and speakers from the international community.
The post GenAI with Adam Bien! first appeared on Računalniški muzej.
Mentorship is one of the overlooked but very important tools for people in leadership roles.
We will discuss how to create quality mentorship, what challenges it brings, and how it affects one's personal and professional environment.
Conversations will take place in small groups, touching on questions such as: What makes a good mentor? How do you establish the right relationship?
The post Tech Leads Meetup first appeared on Računalniški muzej.