Java
My take on a modern Java solution (parts 1 & 2).
spoiler
package thtroyer.day1;

import java.util.*;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class Day1 {

    record Match(int index, String name, int value) {
    }

    Map<String, Integer> numbers = Map.of(
            "one", 1,
            "two", 2,
            "three", 3,
            "four", 4,
            "five", 5,
            "six", 6,
            "seven", 7,
            "eight", 8,
            "nine", 9);

    /**
     * Takes in all lines, returns summed answer
     */
    public int getCalibrationValue(String... lines) {
        return Arrays.stream(lines)
                .map(this::getCalibrationValue)
                .map(Integer::parseInt)
                .reduce(0, Integer::sum);
    }

    /**
     * Takes a single line and returns the value for that line,
     * which is the first and last number (numerical or text).
     */
    protected String getCalibrationValue(String line) {
        var matches = Stream.concat(
                        findAllNumberStrings(line).stream(),
                        findAllNumerics(line).stream())
                .sorted(Comparator.comparingInt(Match::index))
                .toList();
        return "" + matches.getFirst().value() + matches.getLast().value();
    }

    /**
     * Find all the strings of written numbers (e.g. "one")
     *
     * @return List of Matches
     */
    private List<Match> findAllNumberStrings(String line) {
        return IntStream.range(0, line.length())
                .boxed()
                .map(i -> findAMatchAtIndex(line, i))
                .filter(Optional::isPresent)
                .map(Optional::get)
                .sorted(Comparator.comparingInt(Match::index))
                .toList();
    }

    private Optional<Match> findAMatchAtIndex(String line, int index) {
        return numbers.entrySet().stream()
                .filter(n -> line.startsWith(n.getKey(), index))
                .map(n -> new Match(index, n.getKey(), n.getValue()))
                .findAny();
    }

    /**
     * Find all the strings of digits (e.g. "1")
     *
     * @return List of Matches
     */
    private List<Match> findAllNumerics(String line) {
        return IntStream.range(0, line.length())
                .boxed()
                .filter(i -> Character.isDigit(line.charAt(i)))
                .map(i -> new Match(i, null, Integer.parseInt(line.substring(i, i + 1))))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(new Day1().getCalibrationValue(args));
    }
}
Bill is a liability.
Project Panama is aimed at improving the integration with native code. Not sure when it will be "done", but changes are coming.
Nice video about it here : https://youtu.be/cZLed1krEEQ
Tldw: US DOS version actually has 2 separate impossible jumps on a level that aren't present on the European DOS or NES versions.
Wow, that looks really nice!
I use Lua for PICO-8 stuff and it works well enough, but certain parts are just needlessly clumsy to me.
Looks like TIC-80 supports wren. Might have to give that a try sometime!
Yep, absolutely.
In another project, I had some throwaway code where I used a naive approach that was easy to understand and validate. I assumed I'd need to replace it once we confirmed it was correct, because it would be too slow.
Turns out it wasn't a bottleneck at all. It was my first time using Java streams with relatively large volumes of data (~10k items) and it turned out they were damn fast in this case. I probably could have optimized it to be faster, but for their simplicity and speed, I ended up using them everywhere in that project.
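For context, the kind of pipeline I mean looked roughly like this. The record, field names, and data are all made up for illustration; the real items came from elsewhere:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamSketch {
    // Hypothetical record standing in for the real data items.
    record Item(int id, String category, double amount) {}

    // Group items by category and sum their amounts in a single pass.
    static Map<String, Double> totalsByCategory(List<Item> items) {
        return items.stream()
                .collect(Collectors.groupingBy(Item::category,
                        Collectors.summingDouble(Item::amount)));
    }

    public static void main(String[] args) {
        // ~10k synthetic items, roughly the volume where streams held up fine.
        List<Item> items = IntStream.range(0, 10_000)
                .mapToObj(i -> new Item(i, i % 2 == 0 ? "even" : "odd", i * 0.5))
                .toList();
        System.out.println(totalsByCategory(items));
    }
}
```

Nothing clever: readable first, and it turned out to be fast enough anyway.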
I've got so many more stories about bad optimizations. I guess I'll pick one of those.
There was an infamous (and critical) internal application somewhere I used to work. It took in a ton of data, put it in the database, and then ran a ton of updates to populate various fields and states. It was something like,
- Put all data in x table with batch y.
- Update rows in batch y with condition a, set as type a. (just using letters as placeholders for real states)
- Update rows in batch y that haven't been updated and have condition b, set as type b.
- Update rows in batch y that haven't been updated and have condition c, set as type c.
- Update rows in batch y that have condition b and c and condition d, set as type d.
- (Repeat many, many times)
It was an unreadable mess. Trying to debug it was awful. Business rules encoded as a chain of SQL updates are incredibly hard to reason about. Like, how did this row end up with that data??
A coworker and I eventually inherited the mess. Once we deciphered exactly what the rules were and realized they weren't actually that complicated, we changed the architecture to:
- Pull data row by row (instead of immediately into a database)
- Hydrate the data into a model
- Set up and work with the model based on the business rules we painstakingly reverse engineered (i.e. this row is type b because conditions x,y,z)
- Insert models to database in batches
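In (very) simplified form, the new shape was something like this. The conditions and types are hypothetical stand-ins for the real business rules:

```java
import java.util.List;

public class PipelineSketch {
    enum Type { A, B, C, D, UNKNOWN }

    // Hypothetical raw row; the real one carried far more fields.
    record Row(boolean condA, boolean condB, boolean condC, boolean condD) {}

    // The hydrated model carries its classification with it.
    record Model(Row row, Type type) {}

    // All the business rules live in one readable place
    // instead of a chain of conditional UPDATE statements.
    static Type classify(Row r) {
        if (r.condA()) return Type.A;
        if (r.condB() && r.condC() && r.condD()) return Type.D;
        if (r.condB()) return Type.B;
        if (r.condC()) return Type.C;
        return Type.UNKNOWN;
    }

    // Hydrate rows into models; these then get inserted in batches.
    static List<Model> hydrate(List<Row> rows) {
        return rows.stream()
                .map(r -> new Model(r, classify(r)))
                .toList();
    }
}
```

Debugging became "call classify on this row and step through it" instead of replaying a dozen UPDATEs against a copy of the database.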
I don't remember the exact performance impact, but it wasn't markedly faster or slower than the previous "fast" SQL-based approach. We found and fixed numerous bugs, and when new issues came up, they could be fixed in hours rather than days or weeks.
A few words of caution: Don't assume that building things with a certain tech or architecture will absolutely be "too slow". Always favor building things in a way that can be understood. Jumping to the wrong tool "because it's fast" is a terrible idea.
Edit: fixed formatting on Sync
I might be wrong, but the 2nd case looks like an anti-pattern: the loop-switch sequence.
The last case looks the most readable to me. Always start with that unless there's a clear reason not to (e.g. inefficient nested loops).
I think that's a fair argument. PICO-8 definitely could be called a primitive IDE. I think it's closer to being a primitive game engine with so much of its focus being on graphics and sound tooling.
While you can code simple things within PICO-8, I've found that as I've built bigger things, I work better in an outside editor, even if it only gets me smaller fonts, splitable windows, vim bindings, limited linting, and somewhat broken code completion.
This isn't a criticism of PICO-8 as an environment. I think there are a lot of strengths in its simplicity, especially for beginner coders.
I tend to make a distinction between a customizable editor with some support for a language (like vim+plugins) and a dedicated all-in-one tool that fully understands the language and environment (an IDE). PICO-8 is hard to place on that spectrum: it's an all-in-one tool, yet switching to an external editor gives you more features.
This is a very strange article to me.
Do some tasks run slower today than they did in the past? Sure. Are there some that run slower without a good reason? Sure.
But the whole article just kind of complains. It never acknowledges that many things are better than they used to be. It also just glosses over the complexities and tradeoffs people have to make in the real world.
Like this:
Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.
I don't know what exactly is involved in Windows updates, but it's likely 1) a lot of data unpacking, 2) a lot of file patching, and 3) done in a way that hopefully won't bork your system if something goes wrong.
Sure, reinstalling is probably faster, but it's also simpler. If your doctor told you, "The cancer is likely curable. Here's the best regimen to get you there over the next year", it would be insane to say, "A YEAR!? I COULD MAKE A WHOLE NEW HUMAN IN A YEAR!" But I feel like the article is doing exactly that, over and over.
I noticed you don't have a build/dependency management tool set up. I find having one makes project setup and producing builds much easier, for myself and others.
If you're interested, I might be able to add Maven to it and submit a PR. :)