• 9 Posts
  • 41 Comments
Joined 2 years ago
Cake day: July 9th, 2023



  • Some people are asking why other regions seem to be affected when us-east-1 goes down. Why aren’t they separated out? I used to work in AWS, but will speak generally.

    First, it’s important to understand the concept of a control plane versus a data plane. Amazon and other companies operating at large scale often talk in terms of control plane/data plane separation because the two have wildly different scale and requirements.

    A control plane is the side of a service that handles its administrative functions. For example, S3 separates bucket creation and deletion from object creation and editing. In Route 53, this would be creating and editing zones. In IAM, it’s the creation of AWS access keys for IAM users. IAM Roles, IIRC, work differently and can function more in the data plane.

    A data plane is the side of the service that handles the meat and potatoes. In S3, object creates, edits, and deletes are all part of the data plane. In Route 53, it’s every DNS record query. I don’t know whether updating a record is considered a data plane call or not.
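
    A rough way to see the split in code, using boto3 (the bucket, zone, and user names here are invented, and which calls sit on which plane is my characterization, not an official AWS classification):

    ```python
    import boto3

    s3 = boto3.client("s3")
    route53 = boto3.client("route53")
    iam = boto3.client("iam")

    # Control plane: low-volume administrative operations
    s3.create_bucket(Bucket="example-bucket")
    route53.create_hosted_zone(Name="example.com.", CallerReference="unique-ref-123")
    iam.create_access_key(UserName="example-user")

    # Data plane: the high-volume meat and potatoes
    s3.put_object(Bucket="example-bucket", Key="photo.jpg", Body=b"...")
    s3.get_object(Bucket="example-bucket", Key="photo.jpg")
    # (DNS queries never even touch the AWS API; resolvers hit the
    # Route 53 name servers directly, which is its data plane.)
    ```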

    These are separated out because data plane traffic generally dwarfs the volume of administrative API calls. Control plane calls also often carry extra complexity. In Route 53, for example, creating a zone means finding n different name servers that can handle the given domain name without overlapping with another customer, telling those servers they should now answer for it, and getting the records out to servers running all over the world.

    The fact is Route 53 is globally replicated, so it needs a source of truth, and Amazon’s engineering culture pushes it toward a pull-based approach. If a user creates a zone in eu-west-1, they still expect it to show up on servers all over the world, so how do you get it there? AWS’s stance is that certain services may have a single-region dependency in their control plane when avoiding one is technically or commercially infeasible, but the data plane of the service can’t have that dependency.
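
    As a purely conceptual sketch of what “pull based” means here (this is not AWS’s actual implementation, and the class and method names are made up): each edge name server periodically pulls from the source of truth and keeps answering queries out of its last good copy, so the data plane keeps working even if the control plane region is unreachable.

    ```python
    class EdgeNameServer:
        """Toy model of a pull-based edge server (illustrative only)."""

        def __init__(self, source_of_truth):
            self.source_of_truth = source_of_truth   # e.g. the control plane in one region
            self.records = {}                        # last successfully pulled zone data

        def pull(self):
            # Runs on a timer. If the control plane is unreachable, keep the
            # copy we already have instead of failing queries.
            try:
                self.records = self.source_of_truth.fetch_all_zones()
            except ConnectionError:
                pass

        def resolve(self, name):
            # Data plane: answered entirely from the local copy; no call back
            # to the control plane region is made per query.
            return self.records.get(name)
    ```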



  • This is a little misleading. It does not mean that every single region depends on us-east-1 to authenticate every API call. That would be insane and would obviously mean every region has a hard dependency on us-east-1.

    Instead, us-east-1 is what’s called a partition leader. It holds the secret key material for everything in the commercial partition and regularly distributes it to the other regions. So if it’s down for an extended period of time, IAM in other regions can be impacted, and then there’s some additional complexity with STS endpoints. You can actually see a byproduct of this if you look at how the SigV4 signing algorithm works: each HMAC layer expands the key scope.
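
    The signing-key derivation from AWS’s SigV4 documentation shows that expanding scope directly; each HMAC step folds one more scope element (date, region, service) into the key:

    ```python
    import hmac
    import hashlib

    def _hmac_sha256(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
        # Each HMAC layer adds one more element to the key's scope:
        # long-term secret -> date -> region -> service -> "aws4_request"
        k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
        k_region = _hmac_sha256(k_date, region)
        k_service = _hmac_sha256(k_region, service)
        return _hmac_sha256(k_service, "aws4_request")

    # e.g. sigv4_signing_key("EXAMPLE-SECRET-KEY", "20251020", "eu-west-1", "s3")
    # (placeholder secret key; the derived key is only valid for that one
    # date/region/service scope)
    ```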

    Anyway, this part of IAM is pretty battle tested and, from what I saw, not the cause of today’s outage.






  • Even if an external company makes it, the buyer can add an open source mandate to the contract if they want. The US DoD is starting to mandate the use of open standards for its contractors to increase interoperability and the ability to extend those systems.

    Open source software has value of its own, like making it easier for analysts to find security issues, and the act of open sourcing usually pushes organisations to raise the quality because they don’t want to be ashamed of the code. Plus, imagine the clout gained by a dev who gets a bug fix merged that millions of citizens end up using.


  • chaospatterns@lemmy.world to Selfhosted@lemmy.world · Open-WebUI v0.6.29 release

    A newer release, v0.6.30, is already out to fix an issue with the OneDrive integration.

    Looks like they finally made their slim image tag smaller than the main image:

    ghcr.io/open-webui/open-webui:v0.6.30-slim    7c61b17433e8   46 hours ago    4.3GB
    ghcr.io/open-webui/open-webui:v0.6.30         c1ac444c0471   46 hours ago    4.82GB
    

    Though saving only about 0.5 GB of space is not very slim. I use OpenWebUI in my home lab, but this issue made me question the quality of the project a tiny bit.






  • Gluetun doesn’t make any sense here. You’re forcing all the traffic from Jellyfin to go through Mullvad, but you need to be able to connect to Jellyfin, because Jellyfin is a service you connect to.

    Since your Tailscale container is on the host network, you’ll be able to advertise your Docker network subnets over Tailscale and then reach Jellyfin that way. This is done via the TS_SUBNETS env variable. Docker allocates its network subnets out of 172.16.0.0/12 by default.

    You probably intend to gluetun your downloading software, not Jellyfin.






  • > I don’t think there is a technical issue or any kind of complexity at issue here; the problem seems trivial even though I haven’t worked out the details. It is moot since it’s broken on purpose to preserve “They’s” business model.

    I’m explaining what the technical problems are with your idea. It seems like you don’t fully understand the technical details of these networking protocols, and that’s okay, but I’ve summarized a few non-trivial technical problems that aren’t just a matter of people keeping multicast from being used. I assure you that if multicast worked, big tech would want to use it. For example, Netflix would want to use it to distribute content to its CDN boxes and save tons of bandwidth.


  • I don’t know who “they” is in this case, but let’s think about this for a minute.

    Technically what do you need for this to work?

    - How many multicast addresses do you need? How are multicast addresses assigned? Can anybody write to any multicast address? How do I decide that 239.53.244.53 carries my file and not your movie? (See the sketch after this list.)
    - How do we know who is listening? This is effectively BGP, but trickier, because depending on the answers above you may not benefit from any network block sizes to reduce the routing info being shared.
    - How do you decide when to start transmitting a file? Is anybody listening? Does anybody care?
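
    To make the “can anybody write to any group” problem concrete, here’s a minimal sketch (the group address matches the example above, the port is arbitrary, and in practice sender and receiver would be separate hosts): nothing at the socket layer assigns ownership of a group.

    ```python
    import socket

    GROUP = "239.53.244.53"   # arbitrary administratively scoped group; nobody "owns" it
    PORT = 50000              # arbitrary port chosen for this example

    # Sender: any process on any host can transmit to any group it likes.
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    send_sock.sendto(b"a chunk of my file", (GROUP, PORT))

    # Receiver: has to somehow already know which group/port to join.
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    recv_sock.bind(("", PORT))
    mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
    recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = recv_sock.recvfrom(65535)   # blocks until some sender, any sender, transmits
    ```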

  • You seem latched onto the assumption that it would technically work, and haven’t asked whether it’s actually a good solution. P2P is going to work better than multicast.