firelizzard

joined 2 years ago
[–] firelizzard 0 points 6 days ago

If you actually have deep knowledge in a specialty, then you describe yourself as that specialty. ‘Full stack engineer’ conveys that you don’t have a specialty/are a master of nothing/your skills are _ shaped.

[–] firelizzard 2 points 6 days ago

Experience != expertise or skill. I have never met someone who was actually good at both. Maybe if your backend is just some SQL queries. I am a backend engineer and I’m adequate at front end but I’d never hire someone whose skills were merely adequate unless I thought they had the potential to reach ‘good’.

[–] firelizzard 2 points 2 weeks ago (1 children)

Scripting languages being languages that are traditionally source distributed.

  • Source distributed means you can read the source if it hasn't been obfuscated. OTOH, it is trivial to decompile Java and C# so this isn't a real difference for those languages (which happen to be compiled languages). So it's only relevant for languages specifically compiled to machine code.
  • Source distributed means the recipient needs to install something. OTOH, Java and C#, again.

So those are the only ways the distribution mechanism really matters, and even those points are blurred by Java and C#. How does it matter beyond that?

They tend to be much easier to write

I'm assuming you are not saying "real" languages should be hard to write...

run slower

Objective-C and Go run slower than C, and they're all compiled languages. Sure, a plain interpreter will be slower than compiled code, but modern 'interpreted' languages aren't simply interpreted (e.g. they use JIT compilation).

often but not always dynamically typed, and operate at a higher level

There are dynamically typed compiled languages, and high level compiled languages.

It’s not a demeaning separation, just a useful categorization IMO.

Calling one class of languages "real" and another class something else is inherently demeaning. I wouldn't have cared enough to type this if you used "compiled vs scripting" instead of "real vs scripting". Though I disagree with using "scripting" at all to describe a language since that's an assertion of how you use the language, not of the language itself. "Interpreted" on the other hand is a descriptor of the language itself.

As someone who loves C there are lots of languages that seem too limiting and high level, doesn’t mean they aren’t useful tho.

I personally can't stand Java because the language designers decided to remove 'dangerous' features like pointers and unsigned integers because apparently programmers are children who are incapable of handling the risk. On the other hand I love Go. It's high level enough to be enjoyable and easy to write, but if you want to get into the weeds you can.
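
For illustration, both of those 'dangerous' features are readily available in Go; a minimal sketch:

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	// Unsigned integers are first-class citizens in Go.
	var x uint32 = 0xDEADBEEF
	fmt.Printf("%08X\n", x)

	// And when you want the weeds, unsafe lets you reinterpret memory,
	// here viewing the same word as raw bytes (order depends on endianness).
	b := (*[4]byte)(unsafe.Pointer(&x))
	fmt.Println(b[0], b[1], b[2], b[3])
}
```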

[–] firelizzard 7 points 2 weeks ago (3 children)

That line is blurring to the point where it barely exists any more. Compiled languages are becoming increasingly dynamic (e.g. JIT compilation, code generation at runtime) and interpreted languages are getting compiled. JavaScript is a great example: V8's JIT (its optimizing compiler, TurboFan) compiles hot functions into machine code instead of just interpreting them.

IMO the only definition of “real” programming language that makes any sense is a (Turing complete) language you can realistically build production systems with. Anything else is pointlessly pedantic or gatekeeping.

[–] firelizzard 2 points 2 weeks ago (1 children)

Malbolge and brainfuck are also Turing complete. Hell, Magic: The Gathering is technically Turing complete. Yet for some reason no one uses them for production systems… A real programming language is something you can realistically use to create production software, not just something that’s Turing complete.

Also (source):

You can encode Rule 110 in CSS3, so it's Turing-complete so long as you consider an appropriate accompanying HTML file and user interactions to be part of the “execution” of CSS.

So unless you have a different source, CSS is not Turing complete by itself. CSS+HTML is - if you allow “user interactions” which IMO disqualifies it.

[–] firelizzard 1 points 3 weeks ago

Most *sane* programming languages are easy to …

[–] firelizzard 2 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Who says Python isn’t a real programming language? Do they mean it in the same way as “real men prefer X”? That’s an opinion, though an idiotic one. If they mean it in the “CSS isn’t a programming language” sense, that’s factually wrong (about Python).

[–] firelizzard 1 points 3 weeks ago

I’d rather use a language that doesn’t treat me like an incompetent child, removing unsigned ints because “they’re a source of bugs”.

[–] firelizzard 6 points 3 weeks ago

Or use a statically typed language that’s actually modern instead of C

[–] firelizzard 1 points 4 weeks ago (1 children)

Why? In my experience using a real debugger is always the superior choice. The only time I don’t is when I can’t.

[–] firelizzard 1 points 1 month ago (1 children)

Huh? Main file? Do you mean main package? A module can contain an arbitrary number of main packages but I don’t see how that has anything to do with this post. Also are you saying modules are equivalent to classes? That may be the strangest take I’ve ever heard about Go.
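
For the record, a single Go module can hold any number of main packages, each producing its own binary. A minimal sketch with a hypothetical layout:

```go
// Hypothetical module layout:
//
//   go.mod              module example.com/demo
//   cmd/server/main.go  package main -> `go build ./cmd/server`
//   cmd/client/main.go  package main -> `go build ./cmd/client`
//
// cmd/server/main.go:
package main

import "fmt"

func main() {
	fmt.Println("one of several main packages in the same module")
}
```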

[–] firelizzard 1 points 1 month ago* (last edited 1 month ago) (3 children)
 

As a senior developer, I don't find Copilot particularly useful. Maybe it would have been more useful earlier in my career, but at this point writing a prompt that gets Copilot to regurgitate useful code and massaging the resulting output almost always takes as much or more time as it would for me to just write whatever it is I need to write. If I am able to give Copilot a sufficiently specific prompt that it can 'solve' my problem for me, I already know how to solve the problem and how to write the code. So all I'm doing is using Copilot as a ghost writer instead of writing it myself. And it doesn't seem to be any faster.

The autocomplete features are net helpful because they're actually what I want often enough to offset the cost of reading the suggestion and deciding whether it's useful. But the difference (vs writing it myself) is not big enough to justify paying for it out of my own pocket, nor to motivate me to go to the effort of convincing my employer to pay for it.

 

I exclusively use Visual Studio Code for editing code. I primarily work with Go, and a little bit with JavaScript/TypeScript, but I need to do some C# work.

I have no interest in using Microsoft's proprietary C# Dev Kit or dealing with their licensing terms. What capabilities am I losing? The marketing materials for the Dev Kit describe a lot of features that appear to belong to the open source C# extension, so it's unclear which features are actually exclusive to the Dev Kit.

 

Why is crypto.subtle.digest designed to return a promise?

Every other system I've ever worked with has the signature hash(bytes) => bytes, yet whatever committee designed the SubtleCrypto API decided that the browser version should return a promise. Why? I've looked around but I've never found any discussion of the motivation behind that.
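
For contrast, the synchronous shape I'm describing, sketched with Go's standard library:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Bytes in, bytes out, no promise involved.
	sum := sha256.Sum256([]byte("hello world"))
	fmt.Printf("%x\n", sum)
}
```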

112
submitted 7 months ago by firelizzard to c/programming
 

Not sure if this is the right community, but I didn't see a general one. Besides Google increasingly spying on its users, the quality of its search results seems to have gotten significantly worse over the last decade. What search engine(s) do you use?

 

I have a subscription to Nature but most of the articles are totally beyond me. I’m thinking of switching to a comp-sci specific journal. I’m mainly interested in compiler design and the implementation of JIT compilers and VMs like the JVM and the .NET CLR.

 

I am a self-taught programmer and I do not have imposter syndrome. I have a degree in electrical engineering and when I thought that was going to be my career I did have imposter syndrome, so I'm not immune. I wonder if there's a correlation. It seems that many if not most professionals suffer from imposter syndrome; I wonder if that's related to the way they learned.

When I say self-taught, I don't mean I never took a class, I mean the majority of my programming skill was learned by doing/outside of classes. I took a Java class in high school that helped me graduate from procedural languages to OOP, and I took classes in college but with few exceptions the ones that were practical (vs theoretical) covered material I already knew.

 

My last job was at a company that designed and built satellites to order. There was a well defined process for this, and systems engineers were a big part of it. Maybe my experience there is distorting my perspective, but it seems to me that any sufficiently complex project needs to include systems engineering, even if the person doing that is not called a systems engineer. Yet as far as I can tell, it isn't really a thing in the software industry. When I look at job postings and "about us" blog posts about how a company operates, I don't see systems engineering mentioned. Am I just not seeing it, is it called something else, or is the majority of the industry somehow operating without it?

 

I am working on an application that has SDKs in multiple languages. Currently Java, JavaScript, Dart, and Go, but ultimately we'd like to have an SDK for every major language. Our primary test suites are written in Go, which means our other SDKs are not well tested. I do not want to write or maintain test suites in four or ten different languages.

What I would like to do is choose a language to write the tests in, define a test harness interface, implement that test harness for each SDK, and write the tests using that harness. Of course I could do this with RPC/HTTP/etc but that would add significant complexity. I'd prefer to write the tests in a language that has a meaningful degree of interop/FFI with most of the major languages. Lua comes to mind, since it seems like someone has built a Lua interpreter for basically every language in existence, but I have very little Lua experience and I have no idea how painful it might be to do this in Lua. I am open to other suggestions besides interop/FFI and RPC, though I don't want to take the approach of creating test templates and generating the tests in each language. I've done things like that and they're a pain to maintain.
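
To sketch the idea (all names here are hypothetical), the harness interface might look something like this in Go:

```go
// Package harness defines the surface each SDK must implement so a
// single shared test suite can drive every SDK. Names are hypothetical.
package harness

type Harness interface {
	// Connect initializes the SDK against a test network endpoint.
	Connect(endpoint string) error
	// Submit sends a raw transaction and returns its identifier.
	Submit(tx []byte) (id string, err error)
	// Query fetches the current state of a record.
	Query(key string) (value []byte, err error)
}
```

Each SDK would provide its own implementation of that interface, and the test suite itself would be written once against it.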

 

I am not hating on Rust. I am honestly looking for reasons why I should learn and use Rust. Currently, I am a Go developer. I haven’t touched any other language for years, except JavaScript for occasional front end work and other languages for OSS contributions.

After working with almost every mainstream language over the years and flitting between them on a whim, I have fallen in love with Go. It feels like ‘home’ to me - it’s comfortable, I enjoy working with it, and I have little motivation to use anything else. I rage every time I get stuck working with JavaScript, because dependency management is pure hell when dealing with the intersection of packages and browsers - by contrast, dependency management is a breeze with Go modules. I’ll grant that it can suck when using private packages, but everything I work on is open.

Rust is intriguing. Controlling the lifecycle of variables in detail appeals to me. I don’t mind garbage collectors but Rust’s approach seems far more elegant. The main issue for me is the syntax, specifically generic types, traits, and lifetimes. It looks just about as bad as C++'s template system, minus the latter’s awful compiler errors. After working almost exclusively with Go for years, reading Rust feels unnecessarily demanding. And IMO the only thing more important than readability is whether it works.

Why should I learn and use Rust?

P.S.: I don’t care about political stuff like “Because Google sucks”. I see no evidence that Google is controlling the project. And I’m not interested in “Because Go sucks” opinions - it should be obvious that I disagree.

 

I've started noticing articles and YouTube videos touting the benefits of branchless programming, making it sound like this is a hot new technique (or maybe a hot old technique) that everyone should be using. But it seems like it's only really applicable to data processing applications (as opposed to general programming) and there are very few times in my career where I've needed to use, much less optimize, data processing code. And when I do, I use someone else's library.

How often does branchless programming actually matter in the day to day life of an average developer?
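
For context, the canonical toy example is a branchless max: replace the comparison with bit arithmetic. A quick Go sketch (assuming a - b does not overflow):

```go
package main

import "fmt"

// branchlessMax returns the larger of a and b without a branch.
// (a - b) >> 63 arithmetic-shifts to all ones when a < b and to
// zero otherwise, assuming a - b does not overflow.
func branchlessMax(a, b int64) int64 {
	diff := a - b
	mask := diff >> 63
	return a - (diff & mask)
}

func main() {
	fmt.Println(branchlessMax(3, 7)) // 7
	fmt.Println(branchlessMax(9, 2)) // 9
}
```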

 

I am an experienced developer, but not an experienced manager. I'd prefer if organizing tasks was not my responsibility, but I work at a small company and no one else is inclined to do it. How do you organize miscellaneous tasks when using a task management system such as Jira? We're using GitLab, but it has the same basic features, such as epics, milestones, tasks, and subtasks.

I don't want to have miscellaneous tasks floating around in the ether, because things like that tend to get lost. But an epic is supposed to have a well-defined end goal, right? A good epic is something like "Implement this complex feature" or "Reach this level of maturity" - not "Miscellaneous stuff".

The majority of the work we do fits fairly clearly into specific goals, such as "Release the next version of this feature." But what about bug fixes and other random improvements and miscellaneous tasks? How do you keep those organized?
