sebastiancarlos

joined 1 year ago
[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Yes, I’ll host the source code on GitHub. I could consider mirroring it on Sourcehut if there’s enough interest, but I prefer the PR and Issues workflow on GitHub for collaboration. Plus, more people tend to have GitHub accounts than GitLab or Sourcehut, which makes it easier for contributors.

I get the concern about Microsoft, and while I’m not a fan of the company, GitHub has advantages that are hard to beat, especially for community reach. As for OpenAI potentially using the code, personally I don’t mind if my own code gets used for AI training.

I’ll be using the MIT license, in case you're curious. Everyone is free to mirror it anywhere.

[–] [email protected] 1 points 1 month ago

An existing piece of FOSS time-tracking software I like is Timewarrior (CLI)

[–] [email protected] 11 points 1 month ago (1 children)

It's Excalidraw (dark mode)

[–] [email protected] 18 points 1 month ago* (last edited 1 month ago)

Totally understand your perspective, and I’m not here to push back against it. You’ve got a valid point.

I’ll just add that there are already commercial tools that do similar things to what I’m building. It’s interesting to consider how perceptions might shift if a tool were released by a company rather than a solo developer. Sometimes the context influences how a tool is interpreted, even if the underlying functionality remains the same. For what it’s worth, I have no commercial intent behind this.

[–] [email protected] 10 points 1 month ago* (last edited 1 month ago)

Exactly! My tool is designed to work with existing time-tracking tools by processing their output. You can think of it as a post-processor that helps clean up and format the data.

Since there are already plenty of time-tracking tools out there (both CLI and GUI), I wanted something that could act as a flexible add-on for them.
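
For example, with Timewarrior you could pipe its JSON export straight through it (the tool's name below is just a placeholder, since it isn't released yet):

$ timew export | time-cleaner > report.json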

[–] [email protected] 21 points 1 month ago (10 children)

Hey, thanks for the comment. I get that it might be used for something shady, but that's not the intention. The primary goal is to clean up raw time-tracking data into a format that's easy to present to clients or supervisors, especially in contexts where small gaps or irregularities shouldn't show up.

I imagine most professionals aren't expected to account for every single minute of their workday, for example when switching tasks or taking short breaks. It's more about reporting general productivity or the overall progression of tasks, not trying to inflate hours.

Anyone aiming for 'time fraud' could probably find easier methods. My focus is to make life easier for people who already track their work but want cleaner, more digestible reports.

Appreciate the feedback though, helps me make sure the use case is clear! :)

 

It's almost done (it should take one or two weeks to clean it up for a FOSS release). It's a CLI tool. It works great for my use case, but I'm wondering if there's any interest in a tool like this.

Say you have a simple time-tracking tool that tracks what you do daily. The only problem is that there are gaps and whatnot, which might not look nice if you need to send the data to someone else. This tool fixes pretty much all of that.

The main format is JSON with a "description" and either a "duration" or a "start"/"end" pair. It supports the format of Timewarrior (a CLI time-tracking tool) out of the box.
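
For example, an input file could look something like this (field names as described; the exact timestamp and duration formats here are just illustrative):

[
  { "description": "write report", "start": "09:00", "end": "10:23" },
  { "description": "code review", "duration": "45min" }
]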

 
 
120
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]
 

Whether self-hosted or in the cloud, I assume many of you keep a server around for personal things. And I'm curious about the cool stuff you've got running on your personal servers.

What services do you host? Any unique stuff? Do you interact with it through SSH, Termux, or a web interface?

[–] [email protected] 4 points 4 months ago (1 children)

And what's your workflow when working with lots of files in projects with fish?

 

Hey,

As an avid CLI user, I always aimed to master non-interactive tools to perform most of my work, given that they are easy to use, create, extend, and connect.

However, I found myself dealing with software projects with many files (mostly under the yoke of corporate oppression; an ordeal which I endure to sustain myself, as do most of those reading me, and therefore I will not go further into this topic) and started to hit the limits of non-interactive tools for finding and editing files. Indeed, I could go faster if I yielded to the temptation of monstrous IDEs, as I did in my innocent past.

I did not despair, as naturally I had heard of the usefulness of interactive fuzzy finders such as fzf. After spending an afternoon evaluating the tool, I concluded that while it does add complexity to my workflow, that complexity is managed in a sensible way that follows the UNIX tradition.

I now ask you two general questions:

  • Did you reach similar conclusions and decide to use interactive fuzzy finders for working on software projects with many files?
  • If you use fzf or similar tools, what can you tell me about your workflow? Any other third-party tools? Do you integrate it into your scripts? Any advice, earned from long experience with the tool, that is not easily conveyed by the documentation?

I also ask this very specific question:

  • The one part of fzf that I found missing was a way to interact with the results of grep and automatically place the selected file(s) in the prompt or an editor. For that, I created the two commands below. Do you have a similar workflow when you want to bring the speed of fuzzy finding to grep?
#! /usr/bin/env bash

# gf: grep + fzf
# basically a wrapper for 'grep <ARGS> | fzf | cut -f 1 -d:'

# print usage on -h/--help
if [[ "$1" == "-h" || "$1" == "--help" ]]; then
    echo "Usage: gf <grep-args>"
    echo
    echo "~~~ that feel when no 'gf' ~~~"
    echo
    echo "- Basically a wrapper for 'grep <ARGS> | fzf | cut -f 1 -d:'"
    echo "- Opens fzf with grep results, and prints the selected filename(s)"
    echo "- Note: As this is meant to search files, it already adds the -r flag"
    echo
    echo "Example:"
    echo "  $ nvim \`gf foobar\`"
    echo "  $ gf foobar | xargs nvim"
    exit 0
fi

# run grep with arguments, pipe to fzf, and print the filename(s) selected
custom_grep () {
    # -r is always added, as this is meant to search files (see --help)
    grep -E --color=always --binary-files=without-match --recursive "$@"
}
remove_color () {
    # strip ANSI color codes so 'cut' sees plain "file:match" lines
    sed -E 's/\x1b\[[0-9;]*[mK]//g'
}
custom_fzf () {
    # --ansi renders grep's colors; '~98%' adapts height to the input, up to 98%
    fzf --ansi --height ~98%
}
# exit early if grep fails or finds nothing
if ! grep_output=$(custom_grep "$@"); then
    exit 1
fi
echo "$grep_output" | custom_fzf | remove_color | cut -f 1 -d:
#! /usr/bin/env bash

# ge: grep + fzf + editor
# basically a wrapper for 'grep <ARGS> | fzf | cut -f 1 -d: | $EDITOR'

# print usage on -h/--help
if [[ "$1" == "-h" || "$1" == "--help" ]]; then
    echo "Usage: ge <grep-args>"
    echo
    echo "- Basically a wrapper for 'grep <ARGS> | fzf | cut -f 1 -d: | \$EDITOR'"
    echo "- Opens fzf with grep results, and edits the selected file(s)"
    echo "- Note: As this is meant to search files, it already adds the -r flag"
    echo "- Note: Internally, it uses the 'gf' command"
    echo
    echo "Example:"
    echo "  $ ge foobar"
    exit 0
fi

# takes output from 'gf' and opens it in $EDITOR
grep_fzf_output=$(gf "$@")
if [[ -n "$grep_fzf_output" ]]; then
  $EDITOR "$grep_fzf_output"
fi
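
A variation I haven't settled on: if you add --multi to the fzf call in gf, then ge needs to pass each selected file as a separate argument. A sketch, assuming bash 4+ for mapfile:

# read each selected filename into an array, then open them all at once
mapfile -t files < <(gf "$@")
if (( ${#files[@]} > 0 )); then
    $EDITOR "${files[@]}"
fi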

Have a wonderful day, you CLI cowboys.

[–] [email protected] 2 points 5 months ago (1 children)

Unlike a password manager that just logs you in, Beachpatrol can run any automation task, like checking your email, downloading files, or filling out forms. You have to create Playwright scripts for these tasks and run them from a shell command. There is an example script already in the commands folder, which you can run with the command beachmsg smoke-test. The sky is the limit, basically.
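
For instance (smoke-test ships with the repo; the second command is a hypothetical one you'd write yourself):

$ beachmsg smoke-test     # run the bundled example script
$ beachmsg check-email    # your own custom Playwright automation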

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago) (1 children)

Cool project! I'll check it out.

Regarding userscripting, from the F.A.Q.:

Why use an external automation tool (Playwright) instead of a browser extension?

While Beachpatrol allows controlling the browser both from the OS and from a browser extension, our priority was the OS. Therefore, something like Playwright was the natural choice.

Furthermore, while controlling the browser from an extension is possible, Manifest v3 removed the ability to execute third-party strings of code. Popular automation extensions like Greasemonkey and Tampermonkey could also be affected by Manifest v3. The alternative is to embed the code into the extension, but that would require re-bundling the extension after every change. Other tricks do exist to make this approach work, and there is some hope for future Manifest v3 solutions, but this path is certainly tricky.

It is more likely that Selenium and related tools will continue to work in the foreseeable future given the business demand for traditional browser testing.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

Makes sense and you're probably right, but I'll tell you why I didn't do it that way:

  1. I just did what came to me first
  2. I like the idea of the API defining the project structure
  3. When adding a new package manager, if that ever happens, I would like to see all the other implementations of the same functionality in the same file, for help and inspiration

[–] [email protected] 12 points 6 months ago* (last edited 6 months ago)

Tbh these scripts are for my personal use, written in the way that makes sense for me. I only open-sourced it as a joke and as an example of how reinventing your own wheel is not that hard sometimes, and comes with the benefit of doing just what you need it to do.

Actually, I was thinking of adding a sysget fallback, as I might need to do some Debian/Fedora hacking soon.
