hlfshell

joined 1 year ago
[–] hlfshell 9 points 9 months ago

Boston Dynamics' robots are works of art - the pinnacle of engineering - but it's all designed movement. By that I mean the control systems and movement plans are built and designed by experts in their field. It's not as simple as "go from A to B and do some parkour on the way". There's a very large gap between "what is mechanically possible to do" and "just let the robot figure out how to do that".

Mechanically, we're ahead of the software when it comes to manipulation and kinodynamic planning.

[–] hlfshell 6 points 9 months ago* (last edited 9 months ago)

I'm actually working on this problem right now for my master's capstone project. I'm almost done with it: given a simple objective like "I'm thirsty", it generates a series of steps to try to fetch me something, and then in simulation it fetches me a drink or searches rooms that might have one - like contextually knowing the kitchen is a great spot to check.
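The core loop is roughly this (a minimal sketch, not the actual capstone code - `query_llm`, the action names, and the prompt wording are stand-ins for whatever model and prompting you use):

```python
import json

def plan_steps(objective, known_rooms, query_llm):
    """Ask an LLM to break a vague objective into discrete robot steps.

    query_llm is assumed to be any callable that sends a prompt to your
    model of choice and returns its text response.
    """
    prompt = (
        "You control a mobile robot that can: navigate(room), "
        "look_for(object), pick_up(object), hand_to_user(object).\n"
        f"Known rooms: {', '.join(known_rooms)}\n"
        f"User objective: \"{objective}\"\n"
        "Respond with a JSON list of steps, e.g. "
        '[{"action": "navigate", "target": "kitchen"}, ...]'
    )
    return json.loads(query_llm(prompt))

# e.g. plan_steps("I'm thirsty", ["kitchen", "office", "garage"], my_llm)
# might yield: navigate(kitchen) -> look_for(drink) -> pick_up(drink) -> hand_to_user(drink)
```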

There's also a lot of research into using the latest advancements in reasoning and contextual awareness via LLMs to work towards better, more capable embodied AI. I wrote a blog post about a lot of the big advancements here.

Outside of this, I've also worked at various robotics startups for the past five years, though primarily writing data pipelines and control systems for fleets of robots. With that experience in mind, I'd say we are many years out from this being a reasonable product - but maybe not ten years away. Maybe.

[–] hlfshell 1 points 11 months ago* (last edited 11 months ago)

It is extremely difficult to get someone to understand something when their paycheck depends upon them not understanding it.

The solutions that would work for climate change - dramatic reduction in consumption, recycling, large-scale government regulation and oversight to ensure adoption of these policies - don't make money. New technology does.

The deadliness of climate change stems not from its all-encompassing effect, nor from the monumental cataclysms it'll unleash, but from the fact that its solution requires a complete rethinking of the systems that made a select few very powerful people privileged in the first place.

[–] hlfshell 1 points 11 months ago

NewPipe has been flat-out not working for me - I figured it was linked to YouTube's recent crackdown. Is it working for other people without issue?

 

I was aiming to use LLMs with robotics in an upcoming project, and needed to first familiarize myself with the current must-know techniques in the space. To that end I read a ton of papers and wrote this article to try and suss out the best parts of the current state of the art.

I hope this helps people; I'd be thrilled to discuss much of this as well!

 


A cool application of RLHF (Reinforcement Learning with Human Feedback - the same approach OpenAI used to train ChatGPT).

The authors trained an agent to fly FPV drones at a level surpassing world champions.

[–] hlfshell 1 points 1 year ago

A light week for me - mostly going through some more ROS2/Webots tutorials where I can. If anyone has good resources to recommend, lemme know!

[–] hlfshell 1 points 1 year ago

This isn't mine - it's just an interesting blog post I came across. Nor am I arguing that it should replace a robotics engineer.

My main thought, not fully represented in the post, is that LLMs can act as a context engine for high-level understanding of instructions plus spatial awareness, which can then be applied to actuation. This is somewhat touched upon in the article.

I do think there is some interesting work in LLM-powered task-level planning. I'm hoping to find the time to put together a good example of this, utilizing the ability of LLMs to make logical leaps based on instruction. In the article, the model took the command "I'm thirsty" to mean "move to a drink". In a more practical application, we could use LLMs to identify that a room containing multiple identified objects (refrigerator, oven, stove, cabinets, etc.) is in fact a kitchen, and from there determine: "I've seen a room I've identified as a kitchen - I can navigate there to attempt to find a drink".
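As a toy illustration of that kitchen example (hypothetical helper names - the object detector and the LLM call are assumed to exist elsewhere):

```python
def label_room(detected_objects, query_llm):
    """Ask an LLM to name a room given the objects detected in it."""
    prompt = (
        f"A robot sees the following objects in one room: {', '.join(detected_objects)}. "
        "What kind of room is this most likely to be? Answer with a single word."
    )
    return query_llm(prompt).strip().lower()

def pick_search_room(goal_object, room_labels, query_llm):
    """Ask the LLM which labeled room is most likely to contain the goal object."""
    prompt = (
        f"Rooms discovered so far: {', '.join(room_labels)}. "
        f"Which room is most likely to contain a {goal_object}? Answer with the room name only."
    )
    return query_llm(prompt).strip().lower()

# label_room(["refrigerator", "oven", "stove", "cabinets"], llm)        -> "kitchen"
# pick_search_room("drink", ["kitchen", "bedroom", "garage"], llm)      -> "kitchen"
```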

[–] hlfshell 2 points 1 year ago

Oh yeah, MATLAB is painful. I get why you use it at first - it's great for handling derivations for you when looking at control code, and handles matrices well enough when learning kinematics. But once my homework started to demand animations and complex processing, I yearned for a language with classes or any advanced features at all. Still, I managed to make some cool stuff - like this RRT path-planned transmission removal 😄

What startup (assuming you're out of stealth mode)? Good luck with the jump over to a startup - it's rough, but hopefully you knock it out of the park.

As for code deployment - I've worked on that problem at two startups now. I can probably advise you on some things to look into, but I'd need to know more about the problem space you're specifically looking at. Though I'm hesitant to mention full obfuscation if you're not delivering a finished product but rather a module to the end customer.

 

I have argued for a while now that the probabilistic nature of LLMs can represent a form of context that, when applied, can be utilized for robotic applications.

Seems I'm not the only one who had this idea. While simple, Microsoft researchers applied a high-level control library to demonstrate LLMs (ChatGPT) developing robotic task code.
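The gist of that approach, sketched very loosely (the `robot.*` API and `query_llm` here are placeholders of my own, not the actual library from the Microsoft work):

```python
API_DESCRIPTION = """
You may only use these functions to control the robot:
  robot.move_to(x, y)        # drive to a position
  robot.grab(object_name)    # pick up a named object
  robot.release()            # drop whatever is held
Write Python that accomplishes the user's task using only these calls.
"""

def generate_task_code(task, query_llm):
    """Have the LLM write task-level code against a small, whitelisted control API."""
    return query_llm(API_DESCRIPTION + f"\nTask: {task}\nCode:")

# The returned snippet is then reviewed (or sandboxed) before being executed
# against the real control library - the LLM never touches low-level actuation.
```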

[–] hlfshell 2 points 1 year ago* (last edited 1 year ago) (2 children)

For the self-driving car, are we talking hobbyist size (i.e. Donkey Car), add-on tech (i.e. Comma OpenPilot), or a full-on autonomous vehicle? Sounds really cool.

Upgrading from an old tech stack to a newer one is always a pain, especially in Python - one of the reasons I hate that it's so widely adopted for deep learning, CV, and robotics.

Hopefully you'll regale us with details, this sounds fascinating.

 

ROS maintainers discuss popular robotics control and navigation algorithms in use within ROS2. The associated discussion can be found here.

If you're looking for what to study or try applying in your own projects next, this is worth a look.

[–] hlfshell 2 points 1 year ago

Last week I started going through ROS2 lessons online in order to familiarize myself with it for some upcoming projects.

First I spent time working with Vagrant, a tool by HashiCorp for building "repeatable" (debatable overuse of that term, but I digress) dev environments and VM images, to quickly set up versioned ROS environments. This was actually pretty easy, and after a few hours I had a setup I liked. I will report that I do have some issues running Gazebo in a VM on the laptop (to be expected), though it's smooth on the beefier desktop. I am still suffering from occasional complete VM freeze-ups - irrecoverable, though the host machine shows no lag or issues. I think it'll still work for a quick setup of ROS2 for a project team.

Now I'm going through the nav2 stack in ROS and trying to familiarize myself with it. I'm not sure what the scope of the upcoming project is going to be (it's the capstone team project for the entirety of my Masters, so there's a bit of time before decisions have to be finalized). Once that's done, I'll probably dive into the Webots simulator (especially since my own Gazebo is proving unstable).

 

This is the routine thread where we discuss what you'll be working on this week! A cool robot? A computer vision project? Something cool in reinforcement learning? 3D printing a drivetrain? Let us know!

Maybe instead you're studying something, or reading a paper that just came out? Post about it!

It’s also okay to say “nothing” too - it’s great for your mental health to take a break!

Looking for help? Ask a question! See someone working on something cool? Ask them about it! No project is too small or too "newbish"!

[–] hlfshell 4 points 1 year ago (1 children)

How're you liking the ID.4? That and the Ioniq 5 are looking pretty good ATM... though I wish the Honda e or ID.3 were sold in the US...

5
Sunset @ Scripps Pier (programming.dev)
submitted 1 year ago* (last edited 1 year ago) by hlfshell to c/[email protected]
 

What is a San Diego community without an appreciation for our sunsets?

Pic taken on my phone - I was biking around and made my way to the "off ramp" next to Scripps Pier for a nice shot.

And a bit earlier at the same spot...

 

We're playing multiple co-op campaigns, and every druid across those campaigns experiences random, unexpected deaths when knocked out of wild shape. Not death saving throws either - straight into full-blown d-e-a-d.

Per the rules, damage beyond the wild shape's remaining HP should spill over and be removed from the druid's own HP. Reading the combat logs, we see that while enough damage is done to knock the druid out of their wild shape, the remainder is not nearly enough to put them into death saving throws, let alone outright kill them.
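For reference, my understanding of how the spillover math should work (a toy sketch of the rule as I read it, not the game's actual code):

```python
def apply_damage(wild_shape_hp, druid_hp, damage):
    """Damage beyond the wild shape's remaining HP spills over to the druid."""
    spillover = max(0, damage - wild_shape_hp)
    wild_shape_hp = max(0, wild_shape_hp - damage)
    druid_hp -= spillover
    return wild_shape_hp, druid_hp

# e.g. a form with 5 HP left, druid at 30 HP, taking 12 damage:
# apply_damage(5, 30, 12) -> (0, 23)  # knocked out of wild shape, druid at 23 HP,
# nowhere near death saves - yet in-game the druid sometimes just dies outright.
```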

Is anyone else encountering this issue?

[–] hlfshell 1 points 1 year ago

Vagrant ended up being way easier than I had anticipated, with just minor issues. I even got a premade image published so that teammates don't have to sit through the long install process - they can just pull a good base image.

Given that I don't yet know of (nor have access to a machine to test) Docker solutions with GUIs for Windows/Mac, I'm going to stick to the VM approach for now unless I have a strong driver to switch it up.

 

In my first attempt at a long-form technical post, I talk about a project where I had to use deep reinforcement learning to try and solve a robotics application. It worked okay - the post covers my struggles, solutions, and what I'd change in the future.
