Amazing what you can get done if you have a functioning government by and for the people.
You can probably use Audacity to generate a tone. The more you can find out about the specs of the original speaker, the easier it’ll be to generate a new tone.
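If you'd rather produce the test tone in code instead of in Audacity, here's a minimal Go sketch that writes a 16-bit mono WAV file. The 440 Hz frequency, two-second duration, and output filename are placeholders; you'd swap in whatever the original speaker's specs suggest.

```go
package main

import (
	"encoding/binary"
	"math"
	"os"
)

func main() {
	const (
		sampleRate = 44100 // assumed CD-quality rate
		freqHz     = 440.0 // placeholder tone; replace with the original speaker's spec
		seconds    = 2
	)

	// Generate a sine wave at half of full scale.
	samples := make([]int16, sampleRate*seconds)
	for i := range samples {
		t := float64(i) / sampleRate
		samples[i] = int16(0.5 * math.MaxInt16 * math.Sin(2*math.Pi*freqHz*t))
	}

	f, err := os.Create("tone.wav")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dataLen := uint32(len(samples) * 2) // 16-bit mono => 2 bytes per sample

	// RIFF header
	f.Write([]byte("RIFF"))
	binary.Write(f, binary.LittleEndian, 36+dataLen)
	f.Write([]byte("WAVE"))

	// fmt chunk: PCM, 1 channel, 16 bits per sample
	f.Write([]byte("fmt "))
	binary.Write(f, binary.LittleEndian, uint32(16))
	binary.Write(f, binary.LittleEndian, uint16(1)) // PCM
	binary.Write(f, binary.LittleEndian, uint16(1)) // mono
	binary.Write(f, binary.LittleEndian, uint32(sampleRate))
	binary.Write(f, binary.LittleEndian, uint32(sampleRate*2)) // byte rate
	binary.Write(f, binary.LittleEndian, uint16(2))            // block align
	binary.Write(f, binary.LittleEndian, uint16(16))           // bits per sample

	// data chunk
	f.Write([]byte("data"))
	binary.Write(f, binary.LittleEndian, dataLen)
	binary.Write(f, binary.LittleEndian, samples)
}
```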
And you can do the previous years of the coding challenge at any time.
I took some time off, and this was a good source of “real” problems to solve, rather than trying to write something optimized for l33tcode (which is fine… just not a good measure of typical software engineering responsibilities, IMO).
Like I said in my other comment, I think people tend to lump all of MSFT's activities into the same bucket. DevDiv has always seemed pretty decent, and I am usually reminded of this comic when people talk about MSFT's "shady" activities.
Everything is temporary. If we were talking about a niche language, I might worry a little bit that it could just lose momentum and die. But TS is a juggernaut. The only way typescript “dies” is if JS integrates enough of its features to make it redundant.
Besides that, if Oracle managed to allow Java to continue to grow and flourish, I have confidence that MS can do at least that well. I also think lumping all of MS’s products into the same boat is a mistake. They have been pretty good stewards of their languages for decades.
It’s necessary complexity that is easily encapsulated in methods.
If those methods are under test to verify their behavior, trivial typos can be detected instantly, without adding another dialect and more conceptual overhead to a project.
If those methods are not under test, then a DSL offers a tiny bit of help, provided it can be compile-time checked.
I used to be full on the ORM train. Now I’m a little less enthusiastic. What I actually think people need most of the time is something closer to ActiveRecord. Something that can easily map a result set into a collection of typed objects. You still generally write parameterized SQL, but the work of translating a db decimal into the correct target type on a record object in your language is handled for you (for example). In .net, Dapper is a good example.
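Dapper is .NET, but the same “thin mapper” shape translates to most languages. Here's a rough Go sketch using database/sql; the orders table and its columns are made up for illustration, and a helper library (sqlx, for example) would hide most of the Scan boilerplate.

```go
// Package orders sketches the "thin mapper" style: you still write
// parameterized SQL, but the row-to-struct translation lives in one place.
// Assumes some database/sql driver is registered elsewhere; the `?`
// placeholder syntax is driver-dependent.
package orders

import "database/sql"

// Order is a hypothetical record type for illustration.
type Order struct {
	ID     int64
	Email  string
	Amount float64 // the driver handles converting the db decimal for you
}

// OrdersSince maps a result set into a collection of typed objects.
func OrdersSince(db *sql.DB, since string) ([]Order, error) {
	rows, err := db.Query(
		`SELECT id, email, amount FROM orders WHERE created_at >= ?`, since)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var out []Order
	for rows.Next() {
		var o Order
		if err := rows.Scan(&o.ID, &o.Email, &o.Amount); err != nil {
			return nil, err
		}
		out = append(out, o)
	}
	return out, rows.Err()
}
```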
I also think people talk about how other programmers “suck at SQL” waaayy too much.
IMO, for most situations, these are the few high-level things that devs should be vigilant about:
- parameterize all SQL.
- consider the big-O of the app-side lookup/write methods (sometimes an app join, or pulling a larger set and filtering in memory, is better than crafting very complex projections in SQL; see the sketch after this list). This is a little harder to analyze with an ORM, but not by much if you keep the mappings simple and understand the loading semantics of the ORM.
- understand the index coverage of queries and model table keys properly to maintain insert performance (monotonically increasing keys).
- stop fixating on optimizing queries that run in a few seconds, a few times a day. Optimize the stuff you run on every transaction, if you need to.
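To make the second bullet concrete, here's the kind of app-side join I mean, sketched in Go with hypothetical Customer/Order types: pull two simple result sets and stitch them together with a map, instead of one complex projection in SQL.

```go
// Package appjoin sketches an in-memory join: building the map is O(n) and
// each lookup is O(1), so the whole pass is linear instead of the accidental
// O(n*m) you'd get from a nested loop (or from issuing a query per order).
package appjoin

// Customer and Order are hypothetical types already loaded by two simple,
// obvious selects.
type Customer struct {
	ID   int64
	Name string
}

type Order struct {
	ID         int64
	CustomerID int64
	Amount     float64
}

// AttachNames maps each order ID to its customer's name.
func AttachNames(customers []Customer, orders []Order) map[int64]string {
	byID := make(map[int64]Customer, len(customers))
	for _, c := range customers {
		byID[c.ID] = c
	}
	names := make(map[int64]string, len(orders))
	for _, o := range orders {
		names[o.ID] = byID[o.CustomerID].Name
	}
	return names
}
```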
On most of those points, if you don’t have aggregate query counts and performance metrics for your clusters, getting cute with complex queries is flying blind, and there’s no way to prioritize what to optimize.
For the vast majority of applications, simple, obvious selects that don’t involve special db features are going to do the job. When the database becomes a bottleneck, there are usually much more effective ways to handle it than trying to hand-optimize every query.
Lastly, I have a little bit of a theory that part of the reason people do or don’t like looking at SQL in code is that it’s a hard context switch from one language to another, often requiring the programmer to switch into “stringly-typed” mode, something we all learn causes huge numbers of headaches in our first few months of programming. Some developers accept that there are going to be different languages/contexts, that not all of them will be as fluent or familiar, and that this is par for the job. Others recoil from the unfamiliar and want to burn it down. IMO, the former attitude is a lot more productive.
My running joke, after four different friends told me they were using ChatGPT to help them with it, is that the language is so hard to learn that we invented an entirely new class of AI to help.
It’s a joke, of course, but it does have some “surprising” syntax, since some stuff is whitespace sensitive, and there are subtle differences between () and [] and [[ ]], for example. All of that’s due to the long history of shell behavior, so I don’t necessarily blame bash.
Unicode is thoroughly underrated.
UTF-8, doubly so. One of the amazing/clever things they did was to build off of ASCII as a subset by taking advantage of the extra bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend rather than rewrite).
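A quick way to see the trick is to print the raw bytes, here in Go (whose strings are already UTF-8): ASCII text comes out byte-for-byte identical to its ASCII encoding, while every byte of a multi-byte sequence has its high bit set, so the two can never be confused.

```go
package main

import "fmt"

func main() {
	// Plain ASCII: each byte's high bit is 0, so the UTF-8 encoding
	// is byte-for-byte identical to the ASCII encoding.
	fmt.Println([]byte("Go!")) // [71 111 33]

	// A non-ASCII rune uses the "extra" bit: every byte in the
	// multi-byte sequence has the high bit set, so old ASCII data
	// stays valid and new data can't be mistaken for it.
	for _, b := range []byte("é") {
		fmt.Printf("%08b ", b) // 11000011 10101001
	}
	fmt.Println()
}
```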
On the other hand, having dealt with UTF-7 (a very “special” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.
I'm a go n00b, but since the source code is available, I figured I'd look. TL;DR: it probably uses the shell's default globbing resolution to produce the file list.
The generate command iterates over the internal files, but I can't find exactly how GoFiles is populated.
You can probably learn what it's doing by running go generate -n or go generate -x, and I think you can also explicitly call go generate with a file pattern list, which would give you this control.
Otherwise, I think you can include more than one magic comment in a single file, so if you have some dependent generators, these could be placed in the same file, sequentially, and you'd get the expected result.
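For example, a single file can carry several //go:generate directives, and go generate runs them top to bottom within that file. The tool names here (stringer, mockgen) are just stand-ins for whatever your dependent generators actually are.

```go
// Package gen shows the "more than one magic comment per file" idea.
// When you run "go generate ./...", the two directives below run in the
// order they appear; substitute your own generators as needed.
package gen

//go:generate stringer -type=State
//go:generate mockgen -source=store.go -destination=store_mock.go

// State is a placeholder type so the stringer directive has something to work on.
type State int

const (
	Idle State = iota
	Running
)
```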
Another alternative would be to try renaming the relevant files so that they sort the way you want them to run, lexicographically.
It sounds like you might be developing an app with an evolving schema.
You should consider adopting a db migrations framework and having a task that can apply the migrations to a dev database to bootstrap/upgrade the DB. If you take this route, you won’t even need to commit the db file, and you will be able to easily seed/replicate the DB schema when you deploy it.
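As a rough illustration of that task (not any particular framework's API), here's a minimal Go sketch that applies numbered .sql files to a throwaway SQLite dev database. A real migrations framework (golang-migrate, goose, etc.) would also record which versions have already been applied; the migrations/ directory and file naming here are assumptions.

```go
// Minimal "apply migrations to a dev database" task. The dev.db file is
// rebuilt from the .sql files, so it never needs to be committed.
package main

import (
	"database/sql"
	"fmt"
	"os"
	"path/filepath"
	"sort"

	_ "github.com/mattn/go-sqlite3" // assumed SQLite driver
)

func main() {
	db, err := sql.Open("sqlite3", "dev.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	files, err := filepath.Glob("migrations/*.sql")
	if err != nil {
		panic(err)
	}
	sort.Strings(files) // e.g. 001_init.sql, 002_add_users.sql applied in order

	for _, f := range files {
		stmt, err := os.ReadFile(f)
		if err != nil {
			panic(err)
		}
		if _, err := db.Exec(string(stmt)); err != nil {
			panic(fmt.Sprintf("%s: %v", f, err))
		}
		fmt.Println("applied", f)
	}
}
```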
Additionally, SQLite is awesome, and if you are actually storing some data, you can make tables that are backed by structured text files (like CSV), so there are ways to store data in text while still getting the benefits of a SQL interface.
Large binary files will start to expand the git repo, but if they’re relatively small, and the update frequency is somewhat limited, it won’t really be an issue. If you are concerned about it, you can look into git-lfs, but it might not matter much.

EDIT: Also, since “git is bad with binary files” is such a pervasive myth, I decided to check into it a little bit. A couple things:
Git uses "delta compression" when packing/storing/transmitting files. This allows common chunks to be stored once and then reassembled when you check out a file. It does this for "normal" files regardless of whether they are text or binary until they are considered "big", at which point they are stored as a single unit in the pack file. What's "big"? By default, 512MB.
You can go pretty deep on the internals of the way that packfiles are constructed in git, but more than likely, a file that's a few MB is still going to work fine, and you will get some storage reduction when you commit it.
You should configure automatic gc to periodically repack stuff so that the actual .git repo doesn't balloon, but again, even if you're talking about a few GB, it's still not much on modern systems.