Ah, that makes sense. Most cloud providers go the whole nine yards with online hardware provisioning and imaging; I forgot you could still just rent a real machine.
Hmm, I wonder if there was some reason they didn't just extract the original certificates from the VPS, if it really was the hosting provider. I mean, even with mitigations, the keys should be sitting in a temp folder somewhere, surely they could? Issuing new ones seems like a surefire way to alert the operators, unless they already used Let's Encrypt, of course.
Guess I'll just wait for this "sudo dd if=/dev/null of=/dev/sda" command to complete then, since it's unethical to kill it. Wonder what it does...
They previously did not use APEX, but that seems to have changed recently: https://github.com/GrapheneOS/grapheneos.org/commit/7bf9b2671667828d1553c92bf4f64cc749b74d0b Regardless, it seems it will need the verified boot keys, so Google can't update them; most likely the devs will take responsibility for updating the CAs. No idea if they will restore the user control, though.
I feel like this is just describing the future of business process consultants. There's already a role for this, unless I'm missing something?
I'd say as long as you've factually verified the answer yourself, using an LLM to help you answer a question isn't bad; it's about the same as using a search engine. But please don't just ask it the same question and paste the answer here. That actually has the potential for harm and would be unwelcome.
Yep, I got failures building GrapheneOS, and the devs of that ROM made a big fuss on their Twitter when they encountered the failure themselves. The kernel devs really messed up the way they deployed this thing.
Oooh, looks like it can use a sort of inline Jupyter notebook; that's actually really cool. Hopefully it doesn't have network access in the sandbox, or it can definitely try its hand at hacking if asked lol
I think the part that annoys me the most is the hype around it, just like blockchain. People who don't know any better claiming magic.
We've had a few sequence-specific architectures over the years: LSTM, GRU, and now Transformers. Each was better than the last at sequence-to-sequence transformations, and at least for the last one the original task was language translation. We eventually figured out these models have a bit of clairvoyance too: they can make accurate predictions based on past data, or at least accurate enough to bet on, and you can bet traders of various stripes have already made billions off that fact. I've even seen a Transformer-based weather model. It did OK, but Transformers are better at language.
And that's all it is! ChatGPT is a Transformer in the predictive stance. It looks at a transcript of a conversation and predicts what a human is most likely to say next. It's a very complex transformation of historical data. Given the exact same transcript (and deterministic sampling), it gives the exact same answer. In a mathematically rigorous sense, it is entirely incapable of an original thought. Any perceived sentience is a shadow of OpenAI's army of annotators or the corpus it was trained on, and I have a hard time assigning sentience to tomorrow's forecast, which may well have used similar technology. It's just an ultra-fancy search engine index.
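To make the "same transcript in, same answer out" point concrete, here's a toy sketch of my own (obviously nothing like OpenAI's actual code): a bigram lookup table standing in for the model, with greedy decoding always picking the single most likely continuation. Identical prompts always produce identical completions.

```python
# Toy illustration: a tiny "language model" as a bigram frequency table.
# Greedy decoding (always take the most likely next word) is fully
# deterministic -- the same prompt yields the same completion every time.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    """Greedily return the most frequent continuation of `word`, or None."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

def complete(prompt, n=3):
    """Extend a prompt word by word using greedy next-word prediction."""
    words = prompt.split()
    for _ in range(n):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(complete("the"))  # deterministic: repeated calls print the same thing
print(complete("the"))
```

Real LLMs usually add sampling "temperature" on top, which is exactly the knob that makes them look non-deterministic; turn it off and you're back to this picture, just with billions of parameters instead of a lookup table.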
Anyways, that's my rant done, I guess. Call it a cynical engineer's opinion. To be clear, I think it's a fantastic and useful technology, and it WILL change how we interact with machines. It can do fancy things with the combination of "shell" code driving its UI, like multi-step "agents" or running code, and I actually hope OpenAI extends it far into the future. But I sincerely think any form of AGI will be something entirely different from LLMs, or at least they'll only form a small part of it, as an encoder/decoder for its thoughts.
EDIT: Added some paragraph spacing. Sorry, went into a more broad AI rant rather than staying on topic about coding specifically lol
Like others have said, practice is key. However, I'd like to add that you shouldn't feel too discouraged if it feels like you're making no progress. You're probably making more headway than you realize. Personally, in programming more than anything else, I have occasionally only seen results after coming back to a concept I'd given up on learning.
Yeah, in my mind I thought of it more as a "why not" in addition to vision. Like, why make it only as capable as the humans it's trying to replace when it can have even more data to work with? It probably would have been even more expensive, though.
Maybe a bit late, but I've worked on this kind of functionality. I did not work on the algorithm, but the guys who did say:
This is of course based on trust, but these are the claims.