Distributed Systems pt 3 #53

Open
berkeli opened this issue Nov 25, 2022 · 1 comment
berkeli commented Nov 25, 2022

Read: https://docs.google.com/document/d/1WoOTLTdtDqnL3fv3YVfI32kfySHqh7y1UfLizBJ3LXY/edit#heading=h.ep7stp2jwevq

@berkeli berkeli self-assigned this Nov 25, 2022
@berkeli berkeli added this to the Sprint 3 milestone Nov 25, 2022
@berkeli berkeli moved this from Todo to In Progress in Immersive Go Course Nov 25, 2022

berkeli commented Nov 28, 2022

Name 5 functions of load balancers

Load balancing, ensuring load is distributed to available backends
Service Discovery
Health checking (active/passive)
Sticky sessions (making sure requests from a given client are consistently routed to the same backend)
Observability
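
The active health checking mentioned above can be sketched in Go as a periodic filter pass over the backend list. This is a minimal illustration, not from the course material: `probe` stands in for whatever check a real load balancer would run (typically an HTTP GET to a health endpoint).

```go
package main

import "fmt"

// checkBackends runs one active health-check pass: probe is called once per
// backend and unhealthy backends are filtered out of rotation. In practice
// probe would make an HTTP request to each backend's health endpoint and a
// ticker would re-run the pass on an interval.
func checkBackends(backends []string, probe func(string) bool) []string {
	var healthy []string
	for _, b := range backends {
		if probe(b) {
			healthy = append(healthy, b)
		}
	}
	return healthy
}

func main() {
	// Simulated probe: b2 is down. Backend names are placeholders.
	up := map[string]bool{"b1": true, "b2": false, "b3": true}
	probe := func(b string) bool { return up[b] }
	fmt.Println(checkBackends([]string{"b1", "b2", "b3"}, probe)) // [b1 b3]
}
```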

Why might you use a Layer 7 load balancer instead of a Layer 4?

L7 load balancers can inspect much more information about each request than L4 (paths, headers, cookies), which allows for more fine-tuned load balancing.
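As a rough sketch of that fine-tuned routing, the function below picks a backend pool by inspecting the HTTP path — information that only exists after parsing the request, i.e. at L7. The pool names are illustrative, not from the original issue.

```go
package main

import (
	"fmt"
	"strings"
)

// route picks a backend pool based on the request path. An L4 balancer
// couldn't make this decision: it never parses the bytes it forwards.
func route(path string) string {
	switch {
	case strings.HasPrefix(path, "/api/"):
		return "api-pool"
	case strings.HasPrefix(path, "/static/"):
		return "static-pool"
	default:
		return "web-pool"
	}
}

func main() {
	fmt.Println(route("/api/users"))    // api-pool
	fmt.Println(route("/static/a.css")) // static-pool
	fmt.Println(route("/index.html"))   // web-pool
}
```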

When might you use a Layer 4 load balancer?

When you need simpler TCP/UDP load balancing that doesn't require routing connections based on their content.

Give a reason to use a sidecar proxy architecture rather than a middle proxy

A middle proxy can be a single point of failure even when it is distributed, and it also makes it harder to pinpoint where a problem lies in the system.

Why would you use Direct Server Return?

Load balancing can be quite expensive if every response from the server travels back through the load balancer on its way to the client. DSR eliminates this overhead by having the server send responses directly to the client.

What is a Service Mesh? What are the advantages and disadvantages of a service mesh?

A service mesh is an infrastructure layer in which services do not communicate directly with each other but via sidecar proxies. Each service has its own sidecar proxy, which allows finer control over what kinds of requests can be made.
It also simplifies the services themselves, as this logic doesn't need to be coded into each application. The disadvantages are the operational complexity of running a proxy per service and the extra latency of passing every request through those proxies.

What is a VIP? What is Anycast?

VIP - a virtual IP address: an IP address not tied to a single physical network interface, so it can be served by (and moved between) multiple hosts.
Anycast - a routing methodology where a single IP address is shared by multiple devices; a request is typically routed to the closest server, or the one reachable with the fewest hops.

Why doesn’t autoscaling work to redistribute load in systems with long-lived connections?*

Auto-scaling only helps new connections: long-lived connections are distributed once, when they are established, so adding backends doesn't move existing load. Migrating a live connection from one backend to another isn't something an autoscaler or load balancer can do.

How can we make these kinds of systems robust?

One solution is client-side load balancing: obtain the list of backends and keep it refreshed, establish a long-lived connection to each backend, and then spread individual requests across those connections round-robin (or with a more sophisticated policy).
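That idea can be sketched as a small client-side pool. This is an assumption-laden sketch, not code from the course: backend names are placeholders, and in a real system refresh would be driven by service discovery (DNS, a registry) rather than called by hand.

```go
package main

import (
	"fmt"
	"sync"
)

// pool is a client-side view of available backends. Requests are spread
// round-robin across all live backends, so load stays balanced even though
// each underlying connection is long-lived.
type pool struct {
	mu       sync.Mutex
	backends []string
	next     int
}

// refresh swaps in an updated backend list, e.g. after service discovery
// reports that backends were added or removed.
func (p *pool) refresh(backends []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.backends = backends
	p.next = 0
}

// pick returns the next backend in rotation, or false if none are known.
func (p *pool) pick() (string, bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if len(p.backends) == 0 {
		return "", false
	}
	b := p.backends[p.next%len(p.backends)]
	p.next++
	return b, true
}

func main() {
	p := &pool{}
	p.refresh([]string{"b1", "b2"})
	for i := 0; i < 3; i++ {
		if b, ok := p.pick(); ok {
			fmt.Println(b) // b1, b2, b1
		}
	}
	p.refresh([]string{"b1", "b2", "b3"}) // a new backend appeared
	b, _ := p.pick()
	fmt.Println(b) // b1
}
```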

@berkeli berkeli moved this from In Progress to In Review in Immersive Go Course Nov 28, 2022
@berkeli berkeli moved this from In Review to Done in Immersive Go Course Dec 5, 2022