- Sunday, 10/22/23
** 02:58
Stockholm isn't like New York - you can't pretend that there are infinite opportunities. Miss one social connection and you're out of friends for the year. Try again next time.
** 03:13
On https://www.youtube.com/watch?v=5DePDzfyWkw --
I thought Rossman knew what he was doing, but this is such an obvious miss. He's completely ignoring fifteen years of failures of similar projects.
How many 'decentralized identity providers' are there? How many third party centralization attempts? How many secure, ad-free services?
Meta, Twitter, Reddit have all killed expressive API access within the last year - you can data dump, pay lots of money, or give up on it. YouTube is so close to doing the same - blocking adblockers is the first step towards requiring ad consumption or management.
AI data moats are the last straw here, and Google - positioning itself as a direct competitor to OpenAI - has every reason to lock up their APIs in exactly the same way. Rossman's app will never become big or popular enough to make YouTube shut off the API - though I'm sure he will claim this. Such a change will happen in spite of the few hundred users of the app.
The identity provider take also falls as flat as a freshman business student trying to 'start a startup in the bay area'. Oh look, there are N companies providing platform identities. I can't get them to talk to each other to validate legitimacy because legitimacy (or verification) is platform leverage, and no company is going to spend developer time and money to give another company that leverage. How do I solve this? I'll build company n + 1 and make a data moat of verification for the other n platforms!
Keybase tried this and their proofs worked super well. I loved using that app, but they kept throwing security-related stuff at the fan because, despite being open source, building a relatively strong brand, and providing proof of identity, they couldn't find a reason compelling enough to be the n + 1 company, so they folded. Servers cost money. They threw data storage, E2E encrypted messaging, and cryptocurrency wallets on the pile to support your decentralized identity.
Louis'll say that they failed because they dove into crypto. They clearly just never found product-market fit, kept throwing stuff on the pile, and then sold to Zoom - the pandemic-fueled, marketing-pumped video calling app that felt like an off-the-shelf Electron student project from a coding bootcamp, bragged about signing anticompetitive contracts and never paying a designer, then refused to implement key accessibility features for schools. Zoom needed competent staff to patch their security holes (and there were many), so they bought an aimless company to nab the staff.
How many open source beggars have there been for the last ten years? 'My library is free - but please give me a donation.' Nobody donates. Prominent library maintainers burn out and drop off when they're making 20 bucks a month off donations and putting in two hours a day - on top of their salaried job. DRM-free and open-source-but-please-pay-us are fun ideas, but video hosting and streaming cost a hell of a lot - and so few people go out of their way to pay for something unless they're explicitly paywalled out of it.
** 03:32
By the way, I seriously do wish the best for Rossman; I hope his project works and he gets hundreds of millions of users and can afford to hire lots of people to build the distributed identity provider of the future.
I seriously want these tools to exist almost as much as he does. I just don't see how this venture can work out.
(Best-case scenario here - the company reaches tons of users and receives tons of financial support. Turns out, though, that video hosting platforms can't eat the loss of serving video while neither running ads nor charging money for it.
Optimistically, the platforms in question cut a deal trading dollars for API access. This is the video streaming mess but slightly better, because everything is available through a homogeneous platform.
Is it possible for these video streaming services to serve a large fraction of content without receiving compensation?)
** 03:57
My approach to React code is literally just small-scale MVC. A custom hook, or hooks, form the data model. The JSX at the bottom of the component is the view. The compatibility layer is implemented somewhere in between - declaring a const onClick to fetch some data, check some UI bookkeeping, save some user input - mediating between all of them. I haven't learned much of anything.
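That split can be sketched without React at all - here's a framework-free, illustrative sketch of the same shape (all names here are made up, not from any real component):

```javascript
// Model: owns state and the operations on it, like a custom hook would.
function createSearchModel() {
  let query = "";
  let results = [];
  return {
    get query() { return query; },
    get results() { return results; },
    setQuery(q) { query = q; },
    search() { results = ["result for " + query]; }, // stand-in for a fetch
  };
}

// Controller: the glue layer in between - the onClick handlers.
function makeOnSearchClick(model) {
  return () => {
    if (model.query.trim() === "") return; // UI bookkeeping before acting
    model.search();
  };
}

// View: a pure function of the model, like the JSX at the bottom.
function render(model) {
  return `<ul>${model.results.map((r) => `<li>${r}</li>`).join("")}</ul>`;
}
```

The controller is the only piece that knows about both the model and the user's intent, which is exactly the "compatibility layer" role.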
** 04:00
To that end - my approach to coding is just interface design. I start at the top and write a file, hallucinating interfaces from other files. I implement those interfaces in a way that makes sense rather than adhering strictly to the framework I established - within reason. Then I run the code, the differences produce errors, and I coax out some substance.
** 23:23
I love when new features 'fall out' of existing designs. The fact that I can use the import infrastructure designed for jake.isnt.online to bootstrap the website itself is really beautiful.
The solution I have gets around the expression problem, in a way, by faking multiple dispatch.
- Constructors automatically compile files from parent to child if the file doesn't yet exist.
- Paths are always immutable but 'just work' everywhere, regardless of whether we have a naked string or the object, because we check for them in one key, weird-looking case. If you accidentally pass a string as a path (I've been there lots of times with the previous codebase), we fix it for you.
- JavaScript files are loaded with the same infrastructure that loads the files we compile with. They feel a bit too 'special-casey' right now, but I think general approaches will naturally fall out of the files as I write more code, rework, abstract, etc...
- Instantiating classes dispatches to specific instances of those classes, but the caller never has to know which class they have an instance of. Methods always just work.
- Abstracting more actually allows us to obscure and avoid overhead; we can decide when to read the file from disk, when to parse it, etc. as the user interacts with the file in different ways. Complete file state is cached, pre and post compilation, because computers have more memory than we know what to do with (and we aren't deep copying everything in JS like we are in Java world). Getters as immutable functions allow us to pretend that property access just works. (I don't think this is important, but it is fun...)
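Two of those tricks - coercing naked strings into path objects at one boundary, and lazy getters that cache the expensive read - can be sketched in a few lines. This is a hypothetical illustration; none of these names come from the actual codebase, and the disk read is a stand-in:

```javascript
class Path {
  constructor(raw) {
    this.raw = raw;
    this._contents = null; // lazy cache: "read from disk" at most once
  }
  // The one weird-looking case: accept either a string or a Path,
  // so callers never have to care which one they're holding.
  static from(value) {
    return value instanceof Path ? value : new Path(value);
  }
  // Getter pretends property access just works; the read is deferred
  // until first access and the full result cached afterwards.
  get contents() {
    if (this._contents === null) {
      this._contents = `read from ${this.raw}`; // stand-in for a real fs read
    }
    return this._contents;
  }
}

function compile(path) {
  path = Path.from(path); // accidentally passed a string? we fix it for you
  return path.contents.toUpperCase(); // stand-in for real compilation
}
```

Because the coercion lives in one place, every other function can take "a path" without ever branching on its type.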
Time to learn some more math...
** 23:37
How does hot reloading with dependencies work?
When a dependency is created, it tracks which files depend on it and which files it depends on. When I change a file, I fetch/compile/whatever the new version, then notify the files upstream of that dependency change. The lazy implementation completely re-executes everything upstream that depends on it. The good implementation pinpoints exactly what needs an update and fixes it.
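The lazy strategy can be sketched as a small dependency graph - this is an illustrative toy, not the real implementation, and "recompile" here is just a recorded stand-in:

```javascript
class DepGraph {
  constructor() {
    this.dependents = new Map(); // file -> Set of files that import it
    this.recompiled = [];        // order in which files were re-executed
  }
  addDependency(file, dependsOn) {
    if (!this.dependents.has(dependsOn)) this.dependents.set(dependsOn, new Set());
    this.dependents.get(dependsOn).add(file);
  }
  // Lazy implementation: recompile the changed file, then walk upstream
  // and completely re-execute every dependent, transitively.
  onChange(file, seen = new Set()) {
    if (seen.has(file)) return; // guard against dependency cycles
    seen.add(file);
    this.recompiled.push(file); // stand-in for fetch/compile/notify
    for (const upstream of this.dependents.get(file) ?? []) {
      this.onChange(upstream, seen);
    }
  }
}
```

The "good implementation" would replace the blanket walk in `onChange` with a diff of what actually changed, updating only the affected parts of each dependent.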
Surgically replacing parts of files when statically generating a site isn't worth it, but operations like replacing an HTML structure with a new one or re-importing just a specific JS file without changing the whole stack are worth exploring. We had this with the clojure implementation.
By the way - this code is so, so much easier to roll than Clojure. It's incredible how well it works, how fast the code runs, how quiet my computer is when running it; there is no kick-into-high-gear, fire-on-all-cylinders mode like the insane Clojure JVM startup. The Bun REPL is good enough to test ideas out locally or try out modules, but I should also implement some tests at some point... right?
- public document at doc.anagora.org/2023-10-22
- video call at meet.jit.si/2023-10-22