Azure Artifacts has been great for private stuff. Have you tried GitHub Packages yet? It works well enough for public packages. I don't know what your use case is, but what prevents you from using nuget.org?
Yeah, the problem isn't the veracity of the logs, it's providing a mechanism for third parties to prove that the sequence of events in your log hasn't been tampered with after the fact.
Yeah, it's not ideal, but you only need to pay the gas cost when you need to prove integrity, and that's a lot cheaper than having to constantly be in sync with the world.
Audit logs and access-control paper trails.
Security event logging has to be:
- Broadly accessible
- Write-protected
- Offering some proof of completeness
These three requirements are tricky and often conflicting. Blockchain might be an inefficient way to achieve these, but the glove does fit quite neatly.
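To make the write-protection and completeness part concrete, here's a rough sketch of a hash-chained log (types and names are my own, not any particular product): each entry commits to the previous one, so a verifier only needs the latest digest, which is exactly the thing you'd anchor externally when you actually need to prove integrity.

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

// Illustrative sketch only: each entry's hash covers its content plus the
// previous hash, so tampering with or deleting any entry changes every digest
// after it. Publishing just the head digest lets a third party verify the chain.
record LogEntry(string Message, string PreviousHash, string Hash);

class HashChainedLog
{
    private readonly List<LogEntry> _entries = new();

    public string HeadHash => _entries.Count == 0 ? "genesis" : _entries[^1].Hash;

    public void Append(string message)
    {
        var previous = HeadHash;
        _entries.Add(new LogEntry(message, previous, ComputeHash(previous, message)));
    }

    // Recompute the chain and compare against a digest obtained out-of-band
    // (for example the value you anchored on a chain or timestamping service).
    public bool Verify(string anchoredHeadHash)
    {
        var expectedPrevious = "genesis";
        foreach (var entry in _entries)
        {
            if (entry.PreviousHash != expectedPrevious) return false;
            if (entry.Hash != ComputeHash(entry.PreviousHash, entry.Message)) return false;
            expectedPrevious = entry.Hash;
        }
        return expectedPrevious == anchoredHeadHash;
    }

    private static string ComputeHash(string previousHash, string message)
    {
        var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(previousHash + "|" + message));
        return Convert.ToHexString(bytes);
    }
}
```

That covers the write-protection and completeness side; the "broadly accessible" requirement is where a shared ledger, rather than a private database, starts to earn its keep.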
Logistical paperwork
- Purchase Orders/Invoices and packing slips
- Waybills/Bills of lading and CMRs
These kinds of documents require multiple stages of matching and approval by untrusted third parties. There are dozens of ecosystems of interacting systems that support processing these documents, but most people still use paper. Paper is more reliable when you need to deliver a container full of diapers from Poland to North Sudan, but it's incredibly prone to fraud and forgery. Having all of these approvals and transactions tracked on a blockchain, and letting the different systems interact with the same chain, would make that work without each ERP needing a REST API to every other ERP.
Man, I have to agree. Your write-up reflects my experience with Azure Functions in a mid-to-large-sized application way more than the post does. Fantastic.
Hey, I've worked with ML.NET before and it's not the best framework for C#, but it is capable. I'm having trouble understanding what the goal of your model would be. Is it just text prediction, or classification? ML.NET, like pretty much any ML framework, does need some experience with machine learning methods and models to achieve good results. Is this your first time doing something with ML?
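For reference, if it does turn out to be classification, a minimal ML.NET pipeline looks roughly like this. Treat it as a sketch: the file name, column layout and trainer choice are placeholders you'd adapt to your own data.

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

// Placeholder input schema: a text column and a category label in a TSV file.
public class TicketData
{
    [LoadColumn(0)] public string Text { get; set; }
    [LoadColumn(1)] public string Category { get; set; }
}

public class TicketPrediction
{
    [ColumnName("PredictedLabel")] public string Category { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext(seed: 0);
        var data = mlContext.Data.LoadFromTextFile<TicketData>("tickets.tsv", hasHeader: true);

        var pipeline = mlContext.Transforms.Conversion
                .MapValueToKey("Label", nameof(TicketData.Category))                               // label text -> key
            .Append(mlContext.Transforms.Text.FeaturizeText("Features", nameof(TicketData.Text)))  // text -> numeric features
            .Append(mlContext.MulticlassClassification.Trainers.SdcaMaximumEntropy("Label", "Features"))
            .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));              // key -> label text

        var model = pipeline.Fit(data);

        var engine = mlContext.Model.CreatePredictionEngine<TicketData, TicketPrediction>(model);
        var prediction = engine.Predict(new TicketData { Text = "My invoice total is wrong" });
        System.Console.WriteLine(prediction.Category);
    }
}
```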
I would look into the SciSharp stack, a great collection of AI/ML bindings for common frameworks. I would recommend looking into Torch.NET and Keras in particular.
This is a bit of a narrow view of a very vague term. Having worked with many different sizes of organisations, I can say that the responsibilities of whoever is labelled CTO are completely arbitrary. The only thing you can establish is that they are the person accountable for the technology decisions.
Sometimes that's a legacy developer, sometimes that's the first sys-admin.
Sometimes it's the VP of engineering.
Sometimes that's the person that maintains the best relationships with software vendors.
Sometimes it's the person that was hired externally to explain the tech to the CEO and lets them make informed executive decisions.
Sometimes it's just a public figure used to promote the org and maybe do DevRel.
Sometimes it's the Architect that designed the ecosystem.
Sometimes it's the ancient programmer that has kidnapped the entire codebase so that no-one else can sanely work on it.
Sometimes it's a Six Sigma type that set up the ticketing system, PRs and the release process.
At any size, the CTO is whatever the org needs him to be at that point.
Explain to me how this isn't code golfing.
It depends... The myriad reasons to have a dedicated release day often have to do with synchronizing marketing, support and the other departments.
My question is: what does QA mean for your org? Does it mean defect detection? Testing? Acceptance? Those are all different things.

The teams I see that are able to release every day have a strict separation of Quality Control and Functional Acceptance. QC is used to detect defects and regressions and is handled by highly automated processes that engineering is accountable for. Acceptance is then done by a dedicated product/quality team that figures out whether the new functionality is actually built to spec and solves the customer's problems. This also involves blogs, documentation, customer contact, release notes, tutorials, workshops for the support team, etc. This second part is handled by feature flagging (rough sketch below), so that the product teams can beta test, run a limited release and track adoption.
It really depends on what kind of software you're running and what your relationship is with the end user and the rest of the org. Something that is the same in all cases is that your requirements and acceptance criteria need to be very clear from the start, and regression testing needs to be fully automated.
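To be concrete about the feature-flagging part, this is roughly the shape I mean; the interface and flag names are made up, plug in whatever flag service or config mechanism you already use.

```csharp
// Sketch of gating new functionality behind a flag: engineering can merge and
// release the new code path continuously (dark), while product decides per
// customer when it is actually switched on for beta or limited release.
public interface IFeatureFlags
{
    bool IsEnabled(string flag, string customerId);
}

public class PricingService
{
    private readonly IFeatureFlags _flags;

    public PricingService(IFeatureFlags flags) => _flags = flags;

    public decimal CalculateTotal(decimal subtotal, string customerId)
    {
        // Routing between the old and new implementation happens at runtime,
        // so releasing code and releasing the feature become separate decisions.
        return _flags.IsEnabled("new-pricing-engine", customerId)
            ? NewPricing(subtotal)
            : LegacyPricing(subtotal);
    }

    private static decimal NewPricing(decimal subtotal) => subtotal * 1.19m;    // placeholder logic
    private static decimal LegacyPricing(decimal subtotal) => subtotal * 1.21m; // placeholder logic
}
```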
Every engine is going to come with engine-specific problems. You will also come up against many general game development problems, for which the engines have come up with many different creative solutions.
I can't make it any simpler for you. You will waste a bunch of time learning stuff. The only way to avoid that is literally building your own engine that conforms to your expectations and assumptions, because no one else can do that.
There are so many invisible, boring-ish problems: UI, scaling, networking, instancing, level changing, loading screens, even scheduling, etc. You need to learn to love the boring stuff, because it comes at a 10:1 ratio to the fun-ish creative problems.
However, it's better to start wasting that time today than next week.
The way I managed to get an intuition for the language was just building classic board games. Checkers, chess, Diplomacy and Go are great exercises for starting to work with lists and dimensions, declaring multiple predicates and having them interact with each other, changing the state of the program and using the traces to branch out decisions. Remember to keep track of your interpreter: different interpreters act in surprising ways, and the order of operations in SWI is different from Tau.
After that, the honest truth is that Prolog isn't widely used enough to have a 'modern standard approach'. The best way is to treat it like any other embedded subsystem: light and concise scripts embedded in a grown-up language.
Yeah, you're not wrong, that would be more efficient. Again, a blockchain is not an efficient way to do it, but it would be effective.
In practice, audit logs are used by and for auditors: non-technical people who need evidence that would hold up to argument. Yes, you could send your logs to a third party. Now you have to prove that third party's trustworthiness twice a year, to the standards of each legal entity you operate in. And lawyers are more expensive than blockchain devs haha :p
Having a private blockchain that you can share with several changing parties, who can subscribe to it without you having to update anything about your infrastructure, is a benefit.
Even though I've lived through several ISO 27001 certifications, I'm still walking on thin ice when I say that it would probably be easier to explain the blockchain in practice than any other proof-of-completeness method, because the public is more aware of it. On the other hand, the public is also more skeptical of crypto, so it could also backfire :p