📚 node [[2023 10 22]]

  • [[trip to x]]!
    • Flying to [[Hong Kong]] and then [[Tokyo]] today.
    • With [[AG]] :)
    • Very happy about these holidays! They've been planned for a long time, and as work got tough in the last few months I relied on "seeing them coming" quite a bit.
    • I'll be very jet lagged but also likely happy in Shinjuku for the first few days.

As I write this, I'm roughly above [[Baku]] about to cross the [[Caspian Sea]]. I don't have an internet connection so I'm jotting down these local notes which will be synced to the Agora later.

I guess much has already been said about the relative rareness of being offline nowadays; I am old enough to remember a time before being online at all was possible; then a time in which being online was rare; then the transition to always-on home internet and then mobile internet. I welcomed each increment of extra connectivity, and I still love how far we've gotten in this respect; but I can also appreciate the focus that being fully offline for a bit seems to bring. If nothing else it demonstrates that the same focus is always available -- behind the impulse to catch up with messages, or check feeds, or read about Baku and the Caspian Sea on Wikipedia (which is surely what I would be doing right now instead of writing these words were I not truly offline.)

I'm thinking a bit of Agora development during these holidays; it might or might not happen, based on all the sightseeing and experiencing we'll be doing out there in the analog world :) But I thought it would still be nice to think of which things I could improve in the Agora if I have some time available.

I might write some [[executable subnode]] or other, if nothing else because they are fun and self-contained.

I think I will try to do one or two quick iterations on the [[Agora Server]] UI, maybe finishing the move to [[zippies]] as the base widget, as I've already done for nodes, stoas and most sections really. If I am able to move all sections under the search button/field to zippies, the UI will probably look a lot more streamlined and be easier to understand, less confusing (this I'm guessing based on earlier feedback). Also it's not hard to do and the result is immediately visible, so it sounds fun.

Moving on to larger things, [[mycoverse]]/[[fediverse]] integration is something I would love to get done in Q4 2023, so getting started on it would make a lot of sense. I would love to understand the minimum that Agora Server would need to do to be able to expose user accounts as Fediverse feeds. Then new/updated nodes could generate something close to new posts/notes? Unsure.
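For my own reference, the minimum the Fediverse expects per user is an ActivityPub actor document with an inbox and outbox. A rough sketch of what Agora Server might serve (in JavaScript just for illustration; the URL layout is an assumption, nothing like this exists in Agora Server yet):

```javascript
// Hedged sketch: a minimal ActivityPub actor object for an Agora user.
// The /users/... URL layout is a placeholder I'm inventing here; real
// federation would also need WebFinger discovery and HTTP signatures.
const actor = (user, base) => ({
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Person",
  id: `${base}/users/${user}`,
  preferredUsername: user,
  inbox: `${base}/users/${user}/inbox`,   // where other servers POST activities
  outbox: `${base}/users/${user}/outbox`, // feed of this user's notes/updates
});
```

New/updated nodes would then map to `Create`/`Update` activities wrapping `Note` objects in the outbox, but that's the part I still need to think through.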

Also, some playing with a hypothetical [[knowledge commons extension]] for e.g. [[Obsidian]] or [[Logseq]] or [[VSCode]] could be in order after the conversation last week with the [[fellowship of the link]]. But one blocker there is that I'm currently not using either Obsidian or VSCode as garden editors, so I'm not directly scratching an itch. Having said that, moving back to Obsidian or Logseq or [[Foam]] for a bit could make sense to see how far they've come since I last used them. It's still a shame Obsidian is not free software though.

2023-10-22

  • Sunday, 10/22/23

** 02:58

Stockholm isn't like New York - you can't pretend that there are infinite opportunities. Miss one social connection and you're out of friends for the year. Try again next time.

** 03:13

On https://www.youtube.com/watch?v=5DePDzfyWkw --

I thought Rossman knew what he was doing but this is such an obvious miss. He's completely ignoring fifteen years of failures of similar projects - several within the last year alone.

How many 'decentralized identity providers' are there? How many third party centralization attempts? How many secure, ad-free services?

Meta, Twitter, Reddit have all killed expressive API access within the last year - you can data dump, pay lots of money, or give up on it. YouTube is so close to doing the same - blocking adblockers is the first step towards requiring ad consumption or management.

AI data moats are the last straw here, and Google - positioning itself as a direct competitor to OpenAI - has every reason to lock up their APIs in exactly the same way. Rossman's app will never become big or popular enough to make YouTube shut off the API - though I'm sure he will claim this. Such a change will happen in spite of the few hundred users of the app.

The identity provider take also falls as flat as a freshman business student trying to 'start a startup in the bay area'. Oh look, there are N companies providing platform identities. I can't get them to talk to each other to validate legitimacy because legitimacy (or verification) is platform leverage, and no company is going to spend developer time and money to give another company that leverage. How do I solve this? I'll build company n + 1 and make a data moat of verification for the other n platforms!

Keybase tried this and their proofs worked super well. I loved using that app, but they kept throwing security-related stuff at the wall because, despite being open-source, building a relatively strong brand, and providing proof of identity, they couldn't find a reason compelling enough to be the n + 1 company, so they folded. Servers cost money. They threw data storage on the pile, then E2E encrypted messaging, then cryptocurrency wallets to support your decentralized identity.

Louis'll say that they failed because they dove into crypto. They clearly just never found product-market fit, kept throwing stuff on the pile, and then sold to Zoom - the pandemic-marketing-fueled video calling app that felt like an off-the-shelf Electron student project from a coding bootcamp, that bragged about signing anticompetitive contracts and never paying a designer, and that refused to implement key accessibility features for schools. Zoom needed competent staff to patch their security holes (and there were many), so they bought an aimless company to nab the staff.

How many open source beggars have there been over the last ten years? 'My library is free - but please give me a donation.' Nobody pays. Prominent library maintainers burn out and drop off when they're making 20 bucks a month off donations and putting in two hours a day - on top of their salaried job. DRM-free and open-source-but-please-pay-us are fun ideas, but video hosting and streaming cost a hell of a lot - and so few people go out of their way to pay for something unless they're explicitly paywalled out of it.

** 03:32

By the way, I seriously do wish the best for Rossman; I hope his project works and he gets hundreds of millions of users and can afford to hire lots of people to build the distributed identity provider of the future.

I seriously want these tools to exist almost as much as he does. I just don't see how this venture can work out.

(Best-case scenario here: the company reaches tons of users and receives tons of financial support. It turns out, though, that video hosting platforms can't eat the loss while neither serving ads nor charging money for videos.

Optimistically, the platforms in question cut a deal trading dollars for API access. This is the video streaming mess but slightly better, because everything is available through a homogeneous platform.

Is it possible for these video streaming services to serve a large fraction of content without receiving compensation?)

** 03:57

My approach to React code is literally just small-scale MVC. A custom hook, or hooks, form the data model. The JSX at the bottom of the component is the view. The compatibility layer is implemented somewhere in between - declaring const onClick to fetch some data, check some UI bookkeeping, save some user input, mediating between all of them. I haven't learned much of anything.

** 04:00

To that end - my approach to coding is just interface design. I start at the top and write a file, hallucinating interfaces from other files. I implement those interfaces in a way that makes sense rather than adhering strictly to the framework I established - within reason. Then I run the code, the differences produce errors, and I coax out some substance.

** 23:23

I love when new features 'fall out' of existing designs. The fact that I can use the import infrastructure designed for jake.isnt.online to bootstrap the website itself is really beautiful.

The solution I have gets around the expression problem, in a way, by faking multiple dispatch.

  • Constructors automatically compile files from parent to child if the file doesn't yet exist.
  • Paths are always immutable but 'just work' everywhere, regardless of whether we have a naked string or the object, because we check for them in one key, weird-looking case. If you accidentally pass a string as a path (I've been there lots of times with the previous codebase), we fix it for you.
  • JavaScript files are loaded with the same infrastructure that loads the files we compile with. They feel a bit too 'special-casey' right now, but I think general approaches will naturally fall out of the files as I write more code, rework, abstract, etc.
  • Instantiating classes dispatches to specific instances of those classes, but the caller never has to know which class they have an instance of. Methods always just work.
  • Abstracting more actually allows us to obscure and avoid overhead; we can decide when to read the file from disk, when to parse it, etc., as the user interacts with the file in different ways. Complete file state is cached, pre- and post-compilation, because computers have more memory than we know what to do with (and we aren't deep copying everything in JS like we are in Java world). Getters as immutable functions allow us to pretend that property access just works. (I don't think this is important, but it is fun...)
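The string-or-path normalization above can be sketched like this (a toy reconstruction, not the actual codebase; all names here are invented):

```javascript
// Hedged sketch of the "one key, weird looking case": normalize naked
// strings into immutable Path objects at a single choke point, so callers
// never have to care which of the two they're holding.
class Path {
  constructor(raw) {
    this.segments = raw.split("/").filter(Boolean);
    Object.freeze(this); // paths are immutable
  }
  toString() { return this.segments.join("/"); }
  join(child) { return new Path(this.toString() + "/" + child); }
}

// The one weird-looking check: accept a Path or a string, always return a Path.
const asPath = (p) => (p instanceof Path ? p : new Path(String(p)));

// Either form "just works"; methods dispatch on the real object underneath.
asPath("garden/2023-10-22.md").join("assets");
```

The nice property is that every other function in the codebase can open with `p = asPath(p)` and then forget strings exist.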

Time to learn some more math...

** 23:37

How does hot reloading with dependencies work?

When a dependency is created, it tracks which files depend on it and which files it depends on. When I change that file, I fetch, compile, or whatever the new version, then notify the files upstream of that dependency change. The lazy implementation completely re-executes everything upstream that's dependent; the good implementation pinpoints exactly what needs an update and fixes it.
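The lazy variant is a transitive walk over a reverse dependency map, something like this (an illustrative sketch with invented names, not my actual implementation):

```javascript
// Hedged sketch: reverse dependency edges, so a change can find everything
// upstream that needs re-execution (the "lazy implementation" above).
const dependents = new Map(); // file -> Set of files that import it

function addDep(file, dep) {
  if (!dependents.has(dep)) dependents.set(dep, new Set());
  dependents.get(dep).add(file);
}

// Collect every file that transitively depends on `changed`.
function invalidate(changed, seen = new Set()) {
  for (const up of dependents.get(changed) ?? []) {
    if (!seen.has(up)) {
      seen.add(up);
      invalidate(up, seen); // keep walking upstream
    }
  }
  return seen; // re-run all of these, in dependency order ideally
}
```

The pinpointing version would additionally record *which part* of each upstream file used the dependency, which is exactly the surgical-replacement question below.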

Surgically replacing parts of files when statically generating a site isn't worth it, but operations like replacing an HTML structure with a new one, or re-importing just a specific JS file without changing the whole stack, are worth exploring. We had this with the Clojure implementation.

By the way - this code is so, so much easier to roll than Clojure. It's incredible how well it works, how fast the code runs, how quiet my computer is when running it; there is no kick-into-high-gear, fire-on-all-cylinders mode like the insane Clojure JVM startup. The bun repl is good enough to test ideas out locally or try out modules, but I should also implement some tests at some point... right?

📖 stoas
⥱ context