lxpw

joined 1 year ago
[–] [email protected] 3 points 9 months ago

I got approved a few months ago and will have solar panels installed in the next few months. I am happy to hear there will be a second stage as I want to get my windows replaced next. If AB could elect a competent government I would also replace my mid-efficient furnace.

 

The first stable release of OpenTofu (the fork of Terraform) is now out. It lags behind the current 1.6.6 release of Terraform, but it is a big first step. This release is backwards compatible with Terraform 1.6.0 and includes a few new features.

The big new features:

Change log

 

Amazon has finished setting up their second Canadian AWS region. This is big news for anyone in western Canada, as regional public cloud coverage has been non-existent until now. Previously, your only options had been eastern Canada (Montreal) and eastern Canada (Toronto). This is also big news for data sovereignty on AWS. Previously, you didn't have an option for a Canadian disaster recovery region. AWS only had a single Canadian region (ca-central-1), so your DR site would need to be in another country.

To use this region, you will need to enable it under your billing dashboard, as new regions are not enabled by default. This region has 3 AZs, which is what you need for proper clustering. For the longest time, the ca-central-1 (Montreal) region only had 2 AZs. I remember getting asked in a job interview how many AZs ca-central-1 had and I correctly answered 2. They were convinced all regions had a minimum of 3 AZs and I got docked points. I am still fuming.
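A quick way to do this from the CLI (a sketch; I am assuming the new region's code is ca-west-1 and that your credentials are allowed to manage account settings):

```shell
# Opt-in regions are disabled by default; enable it first.
aws account enable-region --region-name ca-west-1

# Then confirm the AZ count (should list three zones).
aws ec2 describe-availability-zones --region ca-west-1 \
  --query 'AvailabilityZones[].ZoneName' --output text
```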

Warning: Advanced technical networking and location ramblings below

The new region has two 100G connections to the Calgary Internet Exchange Point (YYCIX). They terminate at Equinix CL-3 and DataHive. I suspect the first AZ is the standalone AWS datacentre just off of Glenmore east, whose location had leaked. The second one is probably located west downtown (just outside of the ~~100~~ 25 year flood plain) close to DataHive. The DataHive datacentre is tiny, so co-locating an entire Amazon AZ there is not happening. Downtown Calgary has plenty of cheap office space for a datacentre conversion.

The third AZ is probably co-located at Arrow DC2 south of the airport or eStruxture CAL-2 up past the airport. Co-location would explain why there isn't a third connection to YYCIX.

As this region is directly connected to YYCIX, this means traffic will not be routing down to the Seattle IXP, unless you yourself are on a local ISP (Shaw) that doesn't connect to YYCIX yet. I didn't believe this rumour was still true, but I did some digging and I am not seeing a YYCIX connection registered for Shaw.

[–] [email protected] 1 points 1 year ago

I think it is the best option of all the possible choices I have seen, and I can see how the 'Open' they tacked on is required for finding the project through searches. Adoption would have been awful if they stuck with just 'Tofu'. Adoption of tofu as a meat substitute could have improved, though.

21
OpenTF has been renamed OpenTofu (www.linuxfoundation.org)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

It looks like the 'TF' part of OpenTF was too similar to Terraform and they have come up with a new name for the project. In addition, the project is now a part of the Linux Foundation and they have a new website.

https://opentofu.org/

8
AWS is having issues (health.aws.amazon.com)
 

The us-east-1 and us-west-2 regions are experiencing networking issues, and it is also having an effect on a number of other cloud services that rely on those regions.

The number of AWS services this is affecting is growing and will probably affect the majority of their services to some degree.

It isn't a full network outage, but instead a sporadic one (too much load?). As in, one ECS task will be able to register itself with the application load-balancer, while another one will not. If you have an automated environment, this is causing rolling failures right now.
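Not from the post, but the standard way automation rides out this kind of sporadic failure is retrying with exponential backoff and jitter, so one failed registration attempt doesn't cascade. A minimal generic sketch:

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and full jitter.

    During a sporadic outage some calls succeed and some fail; backing
    off keeps automation from hammering an already-degraded API.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts, surface the failure
            # Sleep a random fraction of the capped exponential delay.
            delay = random.uniform(0, base_delay * 2 ** (attempt - 1))
            time.sleep(delay)
```

This won't save you when both your primary and DR regions are degraded at once, but it keeps rolling deployments from failing outright on the first transient error.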

This is impacting one of my clients greatly as those are their primary and DR regions. We are considering deploying DR in a 3rd region, but it could take hours to replicate their database.

 

The Github repository for the community fork of Terraform (called OpenTF) has been made public. If you use any third-party tooling (SpaceLift, Scalr, Env0, Terraspace, Terragrunt, Atlantis, Digger, etc.), you will probably want to plan a switch to OpenTF instead of Terraform to remain license compliant. Well, it is actually more about the third-party tool's compliance. From this point forward, their documentation can't tell you to install a version of Terraform higher than 1.5.5. You will start to see them transition over to suggesting OpenTF instead, once a stable release is available.

OpenTF plans to remain feature compatible with Terraform, but I could see, in the future, new features being added to OpenTF that third-party tool providers require.

I wouldn't compile and use the current OpenTF code for production or even development use yet, but if you wanted to contribute to the project, now is your chance.

https://github.com/opentffoundation/opentf

The first stable release should be coming by October 1st. https://github.com/opentffoundation/opentf/milestone/3

[–] [email protected] 2 points 1 year ago

The Gruntwork (Terragrunt) people have posted their latest response. It looks like multiple companies have banded together and are fully behind forking Terraform, if required.

https://blog.gruntwork.io/the-future-of-terraform-must-be-open-ab0b9ba65bca

 

CloudFormation is the most featureless of the Infrastructure-as-Code template languages. It is miles behind Terraform, Azure ARM/Bicep, and Google Cloud Deployment Manager. I don't think there has been any direct improvement in the language syntax since the introduction of YAML support over a decade ago. The core syntax and functionality of CloudFormation have been frozen for many years, and there is no sign that will ever change. From the outside, it appears the CloudFormation team has been under-resourced and left to rot.

Support for new resource types and properties can take up to 2 years to get implemented. If you are tied to using CloudFormation and need support for a new resource type or property, you are left with creating and maintaining custom resource types (Lambda Functions).

All recent language improvements have been in the AWS::LanguageExtensions transform, which is just an AWS-managed CloudFormation Macro, and it was only released last September. CloudFormation Macros are Lambda functions that run against a template before it is processed. They allow you to interpret your own syntax changes and transform the template before deployment.

Before this looping function support, the AWS::LanguageExtensions transform didn't contain any functionality that made it compelling to use. If you were already aware of how to extend CloudFormation, you probably already had a collection of CloudFormation Macros that went above and beyond the functionality of the AWS::LanguageExtensions transform.
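The looping support in question is the Fn::ForEach intrinsic that the transform enables. A minimal sketch (the resource and queue names here are made up) that stamps out one SQS queue per environment:

```yaml
Transform: AWS::LanguageExtensions
Resources:
  'Fn::ForEach::Environments':
    - Env
    - [Dev, Staging, Prod]
    - 'Queue${Env}':
        Type: AWS::SQS::Queue
        Properties:
          QueueName: 'app-queue-${Env}'
```

Before this, you either copy-pasted the resource three times or wrote your own macro to do the loop.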

Currently, if you want to do anything more advanced than what is built in, you have to create and maintain your own CloudFormation Macros (more Lambda Functions). They are a pain to debug, add a lot more complexity, and increase your maintenance workload. Having AWS provide a macro that greatly extends CloudFormation into something usable would be awesome. We just aren't there yet, but this update shows there is some life left in the corpse.

 

I have been researching the current state of Terraform automation and collaboration tools on behalf of a client and this is a new one that has emerged as a possible option. The client needs something to help manage their many pipelines and state files, but they are not big enough to need a full enterprise Terraform management platform such as Spacelift, Scalr, or Env0. Atlantis was on the short list, but it is showing its age and this is looking to be a better product and a good middle ground solution.

With the recent Hashicorp licensing change, this product may also be impacted. The developers claim they are not using any Hashicorp code and are not affected, but their code does execute a terraform command process, which might still run afoul of the "embedded" part of Hashicorp's BSL "Additional Use Grant". Since they are also the creators of the first fork of the MPL-licensed Terraform code-base, they will surely be under the watchful eyes of Hashicorp's lawyers.

 

The recent change in licensing across all Hashicorp products shows that Hashicorp is not able to or willing to compete with competitors to their enterprise offerings. Even though they officially don't state it, the change is targeted at competitors such as Spacelift, Scalr, and Env0. Those competitors only came to be to fill in gaps that remained after and because of Hashicorp's lacklustre and overpriced Terraform Cloud/Enterprise products.

The Business Source License (BSL) 1.1 is a source-available license (not an open source one) with additional vague wording designed to prevent competitors from building competing products using the source code. The problem in this situation is that it also extends to additional products produced by the code owner (Hashicorp). This means even an open-source (non-commercial) competitor to the separate Terraform Enterprise product is not allowed to use the Terraform command, the Terraform code-base, or any other Hashicorp code-base. Anyone who does any form of Terraform automation that they then provide to their clients for production use will now need to ensure they are not seen as a competitor to a Hashicorp product.

Spacelift has already tried to reassure their customers that they are going to work on a solution going forward.

Even though Hashicorp claims to be supportive of the spirit of open source software, they aren't supportive of open collaboration, and they have been resistant to upstream contributions from the community. This resistance created an environment where new enhancement toolsets were built and then evolved into products competing with their enterprise offering. Now that they have changed their licensing, this will further exacerbate the issue. A fork of the pre-BSL licensed Terraform code-base has already appeared, and if it or another fork gets enough support from the community, we could see the official Terraform toolset being replaced as the de facto Infrastructure-as-Code platform in use today.

I myself have created command wrappers and management tooling to work around the limitations of the Terraform command and the lack of state file drift management. So I will be watching what happens closely and am willing to offer my contributions to any potential competitor.

Additional discussions:

Hacker News: HashiCorp adopts Business Source License

Hacker News: OpenTerraform – an MPL fork of Terraform after HashiCorp's license change

[–] [email protected] 3 points 1 year ago (1 children)

Yup, that is for the AWS CLI command. You could also use that from AWS Cloud Shell.

[–] [email protected] 2 points 1 year ago (3 children)

You can use aws iam list-instance-profiles to get a list of what is already created. I suspect there is something else wrong.

It could be looking for the default Beanstalk instance profile and role (aws-elasticbeanstalk-ec2-role), as it isn't auto-created anymore. It could also be a permission issue with the role's policy.

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html

Elastic Beanstalk is one of the few AWS services I haven't used, as it just deploys a number of other services and resources behind the scenes. It is more of an up-and-running-quick demonstration tool than something you would use IRL. It can be used, but there are better options.

[–] [email protected] 2 points 1 year ago (5 children)

An instance profile is what I would call a legacy resource that really shouldn't be needed, but is still there in the background for backwards compatibility. You can't attach an IAM role directly to an EC2 instance. You need to have an instance profile in between that is named the same as the IAM role.

You can create one using every other interface (command line, CloudFormation, Terraform, SDKs, etc.), but not through the web console (browser). From the web console, you would need to recreate the IAM role and make sure you select EC2 as the purpose/service for the role. Only then will it create a matching instance profile alongside your new IAM role.
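For reference, the CLI version is only a couple of commands (a sketch; the role name here is just an example, and the role must already exist):

```shell
# Create the instance profile, named to match the role,
# then attach the existing role to it.
aws iam create-instance-profile \
    --instance-profile-name my-ec2-role
aws iam add-role-to-instance-profile \
    --instance-profile-name my-ec2-role \
    --role-name my-ec2-role
```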

 

cross-posted from: https://programming.dev/post/1562654

FYI to all the VS Code peeps out there that malicious extensions can gain access to secrets stored by other VS Code extensions as well as the tokens used by VS Code for Microsoft/Github.

I really don’t understand how Microsoft’s official stance on this is that this is working as intended…

If you weren’t already, be very careful about which extensions you are installing.

[–] [email protected] 3 points 1 year ago (3 children)

I picked up a Hakko desoldering gun many years ago to save me from this. It was pricey (~$300), but has been worth it over the years.

 

Aqua Trivy 2.9.0

Trivy is a comprehensive and versatile security scanner. Trivy has scanners that look for security issues, and targets where it can find those issues.

Tags: Security, Vulnerability Scanner, Monitoring

Website - Documentation - Github Home - Github Release

CoreDNS v1.11.0

CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins.

Tags: DNS, Kubernetes

Website - Documentation - Github Home - Github Release

Go v1.21

Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.

Tags: Programming Language, Golang

Website - Documentation - Github Home - Release

Hashicorp Consul v1.16.x

Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure.

Website - Documentation - Github Home - Github Release

OpenSearch 2.9.0

OpenSearch is a community-driven, open source fork of Elasticsearch and Kibana. Elasticsearch can be used to search any kind of document. It provides scalable search, has near real-time search, and supports multitenancy. Kibana provides visualization capabilities on top of the content indexed on an Elasticsearch cluster.

Tags: Search Engine, Dashboards, Monitoring

Website - Documentation - Downloads - Github Home - Github Release

Podman v4.6.0

Podman (the POD MANager) is a tool for managing containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman runs containers on Linux, but can also be used on Mac and Windows systems using a Podman-managed virtual machine.

Tags: Docker, Containers, Command-Line

Downloads - Github Home - Github Release

Prometheus 2.46.0 / 2023-07-25

Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when specified conditions are observed.

Tags: Monitoring, Observability, Dashboards, Metrics, Alerting

Website - Documentation - Downloads - Github Home - Github Release

[–] [email protected] 1 points 1 year ago (1 children)

You would have to use an external tunnel service that will give you an IPv6 address on the internet. As you are sending your traffic through an external provider, it will be slower and they could be monitoring your traffic. Some ISPs even use these tunnelling services to quickly enable IPv6 access.

Tunnel brokers (RFC 3053) are organizations that provide, often for free, a manually or dynamically configured tunnel that encapsulates your IPv6 packets within IPv4 packets. The IPv6 packets at your home are encapsulated into IPv4 packets and sent across the IPv4-only ISP network to the tunnel broker service. When those packets reach the tunnel broker, they are decapsulated and the IPv6 packets are forwarded to the IPv6 Internet. This method can use a traditional GRE tunnel, an IPv4 protocol 41 tunnel, or might leverage the Tunnel Setup Protocol (TSP) (RFC 5572).
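To make the encapsulation concrete, here is a rough Python sketch of what a protocol 41 tunnel endpoint does on the way out. Illustrative only: it skips the IPv4 header checksum, fragmentation, and everything else a real endpoint must handle.

```python
import struct

def encapsulate_6in4(ipv6_packet: bytes, src_v4: str, dst_v4: str) -> bytes:
    """Wrap an IPv6 packet in a minimal IPv4 header with protocol 41.

    The broker end does the reverse: strip the outer IPv4 header and
    forward the inner IPv6 packet to the IPv6 internet.
    """
    version_ihl = (4 << 4) | 5            # IPv4, 20-byte (5-word) header
    total_length = 20 + len(ipv6_packet)  # outer header + inner packet
    ttl, protocol = 64, 41                # 41 = IPv6-in-IPv4 encapsulation
    src = bytes(int(o) for o in src_v4.split("."))
    dst = bytes(int(o) for o in dst_v4.split("."))
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,
        0, 0,                             # identification, flags/fragment
        ttl, protocol,
        0,                                # checksum omitted in this sketch
        src, dst,
    )
    return header + ipv6_packet
```

The outer packet is plain IPv4 as far as your IPv4-only ISP is concerned, which is the whole trick.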

It is looking like Hurricane Electric (https://www.tunnelbroker.net/) is the only one still providing this service, as far as I can find.

If you use a VPN, that could be another option, if the VPN provider isn't disabling IPv6 out of a potential privacy concern (PIA). Even if the VPN service supports IPv6, most VPN clients do not, and your IPv6 DNS queries could get routed to your ISP. If you were using a VPN for privacy, that would expose what websites you are accessing and defeat the purpose of a VPN. That is why VPN providers will sometimes go out of their way to ensure IPv6 is disabled when the VPN is in use.

[–] [email protected] 3 points 1 year ago (3 children)

It is looking like Canadian ISP support for IPv6 is still patchy. I am on Teksavvy, which uses the Shaw network in Alberta, and RogShaw doesn't like to provide their struggling micro competitors any perks. I give myself a 4% chance of getting IPv6 support to work.

If I have time this long weekend, I will try to see if I need to change anything on my Technicolor modem and setup the IPv6 DHCP service on my Mikrotik firewall. My self-managed hardware should support it, my Jekyll and Hyde ISP, probably not.

Use this to see if your ISP supports the latest 90's technology: https://test-ipv6.com/

 

I was wondering how cloud providers seemed to have bottomless pits of IPv4 addresses and weren't more resistant to handing them out like candy. They should be charging more for this scarce resource. AWS was, until now, the only cloud provider to not charge for static public IPv4 addresses, as long as the elastic IP was in use.

I fully expect there will be more pressure in the future to push cloud customers to use dual-stack (both IPv4 and IPv6) or IPv6-only on externally facing services, and to pool services behind application load-balancers or web application firewalls (WAFs). WAFs should support sending incoming IPv4 and IPv6 traffic to an IPv6-only server.

Looking at Imperva's (a WAF) documentation, that should work. I haven't tested this, so I might just have to do that.

By default Imperva handles load balancing of IPv4 and IPv6 as follows:

  • IPv4 traffic is sent to all servers.
  • IPv6 traffic is only sent to the servers that support IPv6.
  • However, if all your servers that support IPv6 are down, then IPv6 traffic is sent to your IPv4 servers.

Imperva also enables you to configure load balancing so that IPv6 traffic is only sent to IPv6 servers and IPv4 traffic is only sent IPv4 servers. Alternatively, you can configure that Imperva sends traffic to any origin server, regardless of whether it is IPv4 or IPv6.

https://docs.imperva.com/bundle/cloud-application-security/page/more/ipv6-support.htm

 

Prometheus will soon include support for ingesting OpenTelemetry metrics into the platform. Even if you understood all of those words, you might be asking, "so what?". This is a big deal for observability (a fancy name for monitoring), as it gets us one step closer to using a single agent to collect all observability telemetry (logs, metrics, traces) from servers.

Currently, you would need to use something like fluentbit/fluentd to collect logs, a Prometheus exporter for metrics, and OpenTelemetry for traces. There are many other tools you might use instead, but these are my current picks. If you are running VMs or physical servers, that means installing/managing three different pieces of software to cover everything. If you are running containers, that could mean up to 3 separate sidecar containers per application container within the same group/task/pod.

OpenTelemetry is being positioned as a one-stop-shop for collecting and working with the three streams of telemetry data (logs, metrics, traces). Currently only trace support is production ready, but work is well under way to get support for logs and metrics ready for prime time.
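As a sketch of where this is heading, a single OpenTelemetry Collector could receive all three signals over OTLP and fan them out to separate backends. Exporter names below follow the collector-contrib conventions; the endpoints are placeholders:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}
exporters:
  prometheusremotewrite:        # metrics -> Prometheus
    endpoint: http://prometheus:9090/api/v1/write
  otlphttp:                     # traces/logs -> an OTLP-capable backend
    endpoint: http://backend:4318
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      exporters: [otlphttp]
```

One agent, one config, instead of three separate sidecars.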

There have been huge moves across the industry to add support for working with OTLP (OpenTelemetry Protocol) data streams. Prometheus is becoming the most popular backend for storing and alerting on metric data. The current blockers have been native support for OTLP ingestion and incompatible metric naming.
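If I am reading the in-progress work correctly, the ingestion side lands behind a feature flag (the flag and endpoint names may still change before release):

```shell
# Start Prometheus with the experimental OTLP receiver enabled...
prometheus --enable-feature=otlp-write-receiver
# ...then point an OTLP/HTTP metrics exporter at:
#   http://<prometheus-host>:9090/api/v1/otlp/v1/metrics
```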

According to this blog post, we are close to getting these 2 issues resolved.

https://last9.io/blog/native-support-for-opentelemetry-metrics-in-prometheus/

[–] [email protected] 1 points 1 year ago (1 children)

It is preferable to have the dock power the laptops; then there is only 1 cable to plug in. If your personal laptop has a USB-C power port, it can probably be powered through it. Plugging it in to your work laptop power supply shouldn't break it, as there is a lot of negotiating taking place before power is provided. You may want to search the internets first.

The Dell docks are also universal and will work. Avoid HP, as theirs are proprietary. Some other brands (Plugable, Anker) work really well, but may not include the power adapter. Make sure you account for the power adapter when comparing docks. I would get one of the new 100W USB-C adapters (UGreen or Anker) that can power your dock, devices, and laptop (by way of the dock).

I use a mix of Dell and Anker USB-C docks with Dell, HP, and Macbook laptops, and run up to dual 4K displays while powering the laptops (the HPs are limited).

There are a few things to watch out for. Your laptop's USB-C port needs to be a Thunderbolt port to work with a Thunderbolt dock. If it isn't, you will need a non-Thunderbolt USB-C dock.

The port needs to support Power Delivery (PD) and may still limit charging to 60W. You should get up to 82W after the dock takes its cut. Some laptops (Dell) support higher charging rates only with their own brand docks. If you are gaming, your battery will drain, just slowly.

The port should support DisplayPort even if you are using HDMI. Most docks will have a mix of DP and HDMI. You will need an ACTIVE DP to HDMI adapter. If one of your monitors has DP, use that instead of an adapter.

[–] [email protected] 4 points 1 year ago

Are we the lobsters?

[–] [email protected] 2 points 1 year ago

We have a slack channel where we dump a number of cloud/service outage RSS feeds into. Github has always dominated that channel.
