
[Editor's note] Still struggling with canary releases on Kubernetes, traffic control between old and new versions, user-based traffic splitting, and other content-based advanced routing? This article will get you up and running.

Background

Our business needs to handle HTTP and HTTPS traffic differently, so I investigated how to split HTTP and HTTPS traffic with Kubernetes Ingress.

Traffic Flow Diagram

The traffic flow is shown in the diagram:

Evaluating the Options

Kubernetes Service config

http-svc:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cyh-nginx
  name: http-svc
  namespace: cyh
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: cyh-nginx
  sessionAffinity: None
  type: ClusterIP

https-svc:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cyh-nginx
  name: https-svc
  namespace: cyh
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: cyh-nginx
  sessionAffinity: None
  type: ClusterIP

Two Services are prepared so that the Ingress Controller can route to different Services, and thereby to different backend ports (80/443).

HAProxy Ingress Controller

http-ingress config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/config-backend: ""
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: haproxy
  labels:
    app: cyh-nginx
  name: http-ingress
  namespace: cyh
spec:
  rules:
  - host: cyh.test.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /

https-ingress config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/config-backend: ""
    ingress.kubernetes.io/ssl-passthrough: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: haproxy
  labels:
    app: cyh-nginx
  name: https-ingress
  namespace: cyh
spec:
  rules:
  - host: cyh.test.com
    http:
      paths:
      - backend:
          serviceName: https-svc
          servicePort: 443
        path: /
  tls:
  - hosts:
    - cyh.test.com
    secretName: cyh-nginx-crt

After applying these, the key parts of the generated HAProxy Ingress configuration are as follows:

........

acl host-cyh.test.com var(txn.hdr_host) -i cyh.test.com cyh.test.com:80 cyh.test.com:443 

........

use_backend cyh-http-svc if host-cyh.test.com

The configuration above results from creating http-ingress first and https-ingress second. As it shows, splitting HTTP and HTTPS traffic requires an acl that determines whether the current request is HTTP or HTTPS, for example:

........

acl host-cyh.test.com var(txn.hdr_host) -i cyh.test.com cyh.test.com:80 cyh.test.com:443 

acl https-cyh.test.com var(txn.hdr_host) -i cyh.test.com cyh.test.com:80 cyh.test.com:443  

........



use_backend https-ingress-443 if from-https https-cyh.test.com

use_backend cyh-http-svc if host-cyh.test.com

Reality, however, falls short of the ideal: the current HAProxy Ingress has nothing like a

ingress.kubernetes.io/config-frontend: ""

annotation. But could we instead customize frontend acl and use_backend rules through the HAProxy Ingress ConfigMap? Unfortunately not: testing shows that when frontend acl and use_backend rules are configured in the ConfigMap, the HAProxy reload throws the following WARNING:

rule placed after a 'use_backend' rule will still be processed before. 

This means the frontend acl and use_backend rules configured in the ConfigMap conflict with the existing rules and do not take effect. Even though inspecting the haproxy.conf file shows they were written out, the configuration HAProxy actually uses has already been loaded into memory; see haproxy-ingress-issue-frontend. The same conclusion follows from the HAProxy Ingress Controller source code:

kubernetes-ingress/controller/frontend-annotations.go:

func (c *HAProxyController) handleRequestCapture(ingress *Ingress) error {
    ........
    // Update rules
    mapFiles := c.cfg.MapFiles
    for _, sample := range strings.Split(annReqCapture.Value, "\n") {
        key := hashStrToUint(fmt.Sprintf("%s-%s-%d", REQUEST_CAPTURE, sample, captureLen))
        if status != EMPTY {
            mapFiles.Modified(key)
            c.cfg.HTTPRequestsStatus = MODIFIED
            c.cfg.TCPRequestsStatus = MODIFIED
            if status == DELETED {
                break
            }
        }
        if sample == "" {
            continue
        }
        for hostname := range ingress.Rules {
            mapFiles.AppendHost(key, hostname)
        }

        mapFile := path.Join(HAProxyMapDir, strconv.FormatUint(key, 10)) + ".lst"
        httpRule := models.HTTPRequestRule{
            ID:            utils.PtrInt64(0),
            Type:          "capture",
            CaptureSample: sample,
            Cond:          "if",
            CaptureLen:    captureLen,
            CondTest:      fmt.Sprintf("{ req.hdr(Host) -f %s }", mapFile),
        }
        tcpRule := models.TCPRequestRule{
            ID:       utils.PtrInt64(0),
            Type:     "content",
            Action:   "capture " + sample + " len " + strconv.FormatInt(captureLen, 10),
            Cond:     "if",
            CondTest: fmt.Sprintf("{ req_ssl_sni -f %s }", mapFile),
        }
        ........
}

All of the frontend configuration is controlled directly by the code, with no entry point for customization. HAProxy Ingress therefore cannot split HTTP and HTTPS traffic.

Nginx Ingress Controller

Besides HAProxy, Nginx is another widely used load balancer. After surveying the Ingress options of the Nginx Ingress Controller, the only viable entry points are:

nginx.org/location-snippets

nginx.org/server-snippets

These two can be combined to customize the server and location blocks, but a location can only match on path, and since both backends here use the path "/", that does not fit. I did not verify this approach; it is in any case overly complex. This article instead introduces a much simpler option: NGINX VirtualServer and VirtualServerRoute.

VirtualServer and VirtualServerRoute are supported starting from Nginx Ingress Controller release 1.5. Anyone familiar with F5 will recognize the concepts: Nginx introduced these custom resources to replace the Nginx Ingress resource, and they add capabilities that Ingress lacks, namely traffic splitting and content-based advanced routing. To install the Nginx Ingress Controller, see Installation with Manifests (version 1.6.3 at the time of writing). Next we create VirtualServer and VirtualServerRoute resources to split HTTP and HTTPS traffic.

cyh-nginx-virtualserverroute.yaml:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cyh-nginx-virtualserver
  namespace: cyh
spec:
  host: cyh.test.com
  tls:
    secret: cyh-nginx-crt
  upstreams:
  - name: http-svc
    service: http-svc
    port: 80
  - name: https-svc # nginx upstream name; not the upstream name generated in the actual nginx config
    service: https-svc # kubernetes service name
    tls: # enable must be set for an HTTPS backend, otherwise it returns 400 (plain HTTP sent to an HTTPS port); with it enabled, proxy_pass uses https
      enable: true
    port: 443
  routes:
  - path: / # nginx location config
    matches:
    - conditions: # condition matching for content-based advanced routing; use Split for traffic weighting
      - variable: $scheme # a variable, header, cookie, or argument; here $scheme is http or https
        value: "https" # when $scheme is https, route to the https-svc backend; all other traffic goes to http
      action:
        pass: https-svc
    action:
      pass: http-svc

After applying it, let's look at the actual Nginx configuration:

/etc/nginx/conf.d/vs_xxx.conf:

upstream vs_cyh_nginx-virtualserver_http-svc {
    zone vs_cyh_nginx-virtualserver_http-svc 256k;
    random two least_conn;
    server 172.49.40.251:80 max_fails=1 fail_timeout=10s max_conns=0;
}

upstream vs_cyh_nginx-virtualserver_https-svc {
    zone vs_cyh_nginx-virtualserver_https-svc 256k;
    random two least_conn;
    server 172.x.40.y:443 max_fails=1 fail_timeout=10s max_conns=0;
}

map $scheme $vs_cyh_nginx_virtualserver_matches_0_match_0_cond_0 {
    "https" 1;
    default 0;
} # https route match

map $vs_cyh_nginx_virtualserver_matches_0_match_0_cond_0 $vs_cyh_nginx_virtualserver_matches_0 {
    ~^1 @matches_0_match_0;
    default @matches_0_default;
} # default route match

server {
    listen 80;
    server_name cyh.test.com;
    listen 443 ssl;
    ssl_certificate /etc/nginx/secrets/cyh-cyh-nginx-crt;
    ssl_certificate_key /etc/nginx/secrets/cyh-cyh-nginx-crt;
    server_tokens "on";

    location / {
        error_page 418 = $vs_cyh_nginx_virtualserver_matches_0;
        return 418;
    }

    location @matches_0_match_0 {
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_buffering on;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://vs_cyh_nginx-virtualserver_https-svc;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }

    location @matches_0_default {
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        client_max_body_size 1m;
        proxy_buffering on;
        proxy_http_version 1.1;
        set $default_connection_header close;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $vs_connection_header;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://vs_cyh_nginx-virtualserver_http-svc;
        proxy_next_upstream error timeout;
        proxy_next_upstream_timeout 0s;
        proxy_next_upstream_tries 0;
    }
}
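The two map blocks in the generated config do all the routing work: the first turns $scheme into a 0/1 flag, the second turns that flag into a named location, and the `error_page 418` trick in `location /` performs the internal redirect to it. As a simplified illustration (this is not NGINX code; the function names are invented, and only the targets mirror the config above), the two-stage lookup behaves like:

```python
def cond_0(scheme: str) -> str:
    # mirrors: map $scheme $..._cond_0 { "https" 1; default 0; }
    return "1" if scheme == "https" else "0"

def matches_0(flag: str) -> str:
    # mirrors: map $..._cond_0 $..._matches_0 { ~^1 @matches_0_match_0; default @matches_0_default; }
    return "@matches_0_match_0" if flag.startswith("1") else "@matches_0_default"

# each named location proxy_passes to a fixed upstream, as in the config above
UPSTREAMS = {
    "@matches_0_match_0": "https://vs_cyh_nginx-virtualserver_https-svc",
    "@matches_0_default": "http://vs_cyh_nginx-virtualserver_http-svc",
}

def route(scheme: str) -> str:
    """Return the proxy_pass target chosen for a request with the given $scheme."""
    return UPSTREAMS[matches_0(cond_0(scheme))]

print(route("https"))  # https://vs_cyh_nginx-virtualserver_https-svc
print(route("http"))   # http://vs_cyh_nginx-virtualserver_http-svc
```

This also makes clear why everything other than HTTPS falls through to the http-svc upstream: only an exact "https" value of $scheme produces the match flag.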

With the Nginx Ingress Controller's VirtualServer and VirtualServerRoute, content-based advanced routing and traffic splitting become very simple. Beyond HTTP/HTTPS splitting, the same mechanism extends to other content-based routing and traffic-splitting scenarios; for example, when a pre-release environment and a production environment coexist to test a new feature, traffic can be split by user, so that Zhang San's traffic goes to the pre-release environment with the new feature while everyone else's still goes to production. For more on these resources, see: VirtualServer and VirtualServerRoute Resources.

Closing Remarks

Besides the Nginx Ingress Controller VirtualServer and VirtualServerRoute approach, traditional HAProxy TCP forwarding or LVS TCP forwarding would also work: simply bind their VIPs to the Kubernetes Endpoints. Neither of those options is covered in this article.
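For a rough sense of the TCP-forwarding alternative, a minimal HAProxy layer-4 passthrough could look like the following sketch. The frontend/backend names and the server address are hypothetical, and a real setup would need some mechanism to keep the server list in sync with the Endpoints of https-svc as pods come and go:

```haproxy
# Layer-4 (TCP) passthrough sketch: TLS is not terminated here,
# the raw connection is forwarded to the pod on port 443.
frontend https-in
    bind *:443
    mode tcp
    option tcplog
    default_backend https-endpoints

backend https-endpoints
    mode tcp
    balance roundrobin
    # illustrative pod Endpoint address; must track the https-svc Endpoints
    server pod-1 172.49.40.251:443 check
```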
