Application Architecture Practice in the Rapid Growth Stage (20): Orchestrating Large Applications

勿忘初心2018-11-15 14:03



4.6.4 Orchestrating Large Applications


A large application comprises multiple modules and services. The usual approach is to write an orchestration script that defines the configuration and dependencies of each module, so the application can then be deployed to multiple environments from that script. Docker containers make application orchestration even more convenient, and the Docker Compose syntax is now widely supported across cloud platforms.


This section shows how to orchestrate an ELK (Elasticsearch, Logstash, and Kibana) stack on a native Kubernetes platform. The templates are written in YAML.


Setting Up a Native Kubernetes Platform

A Kubernetes cluster can be set up with the open-source tools minikube or kubeadm. For installation details, see the official sites:

minikube: https://github.com/kubernetes/minikube

kubeadm: https://kubernetes.io/docs/getting-started-guides/kubeadm/

Building the Elasticsearch, Logstash, and Kibana Images
Elasticsearch
Elasticsearch uses version 2.4.4, configured as follows.
elasticsearch.yml

network.bind_host: "0.0.0.0"
network.publish_host: _non_loopback_
http.port: 9200


Dockerfile

FROM elasticsearch:2.4.4-alpine

# The config above publishes on the first local non-loopback interface
ADD elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml

# Make the config file world-readable so the ES process can load it
RUN chmod a+r /usr/share/elasticsearch/config/elasticsearch.yml

Logstash
The Logstash image uses version 2.4.1, configured as follows.
logstash.conf

input {
  courier {
    transport => "tcp"
    port => 8600
  }
}

filter {
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }

    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }

    geoip {
      source => "clientip"
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch"
  }
}

The nginx log template used for testing is as follows.
nginx 

The log agent used in this setup is log-courier, so the logstash-input-courier plugin needs to be installed.
Dockerfile

FROM logstash:2-alpine
# Install the courier input plugin (assumes the Logstash 2.x bin directory is on the PATH)
RUN plugin install logstash-input-courier

Kibana
Kibana is configured as follows.
kibana.yml

# Port for the Kibana server to listen on
server.port: 5601
# Bind address; "0.0.0.0" means listen on all interfaces
server.host: "0.0.0.0"
# Address of the Elasticsearch instance
elasticsearch.url: "http://elasticsearch:9200"

Dockerfile

FROM alpine
# Set environment variables
ENV KIBANA_VERSION 4.6.4
ENV PKG_NAME kibana
ENV PKG_PLATFORM linux-x86_64
ENV KIBANA_PKG $PKG_NAME-$KIBANA_VERSION-$PKG_PLATFORM
ENV KIBANA_CONFIG /opt/$PKG_NAME-$KIBANA_VERSION-$PKG_PLATFORM/config/kibana.yml
ENV KIBANA_URL https://download.elastic.co/$PKG_NAME/$PKG_NAME/$KIBANA_PKG.tar.gz
ENV ELASTICSEARCH_HOST elasticsearch
RUN addgroup -S kibana \
 && adduser -S -G kibana kibana
# Download Kibana
RUN apk add --update ca-certificates wget nodejs \
 && mkdir -p /opt \
 && wget -O /tmp/$KIBANA_PKG.tar.gz $KIBANA_URL \
 && tar -xvzf /tmp/$KIBANA_PKG.tar.gz -C /opt/ \
 && ln -s /opt/$KIBANA_PKG /opt/$PKG_NAME \
 && sed -i "s/localhost/$ELASTICSEARCH_HOST/" $KIBANA_CONFIG \
 && rm -rf /tmp/*.tar.gz /var/cache/apk/* /opt/$KIBANA_PKG/node/ \
 && mkdir -p /opt/$KIBANA_PKG/node/bin/ \
 && ln -s $(which node) /opt/$PKG_NAME/node/bin/node \
 && chown -R kibana:kibana /opt
# Expose the service port
EXPOSE 5601
# Add the default Kibana configuration
ADD kibana.yml /opt/kibana/config/kibana.yml
USER kibana
# Set the working directory
WORKDIR /opt/kibana
CMD ["/opt/kibana/bin/kibana"]

Native Kubernetes Orchestration Template
This is an experimental setup, so the three ELK images are placed in a single pod, with the logstash and kibana services exposed; applications access the logstash and kibana services directly. The template is as follows.
elk-deploy.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elk-rc
  labels:
    k8s-app: elk
spec:
  template:
    metadata:
      labels:
        k8s-app: elk
    spec:
      containers:
      - image: hub.c.163.com/gobyoung/elasticsearch:2.4.0-elk
        name: elasticsearch
        ports:
        - containerPort: 9200
        volumeMounts:
        - name: es-storage
          mountPath: /usr/share/elasticsearch/data
      - image: hub.c.163.com/gobyoung/logstash:2-elk
        name: logstash
        ports:
        - containerPort: 8600
      - image: hub.c.163.com/gobyoung/kibana:4-elk
        name: kibana
        env:
        - name: ELASTICSEARCH_URL
          value: http://127.0.0.1:9200
        ports:
        - containerPort: 5601
      volumes:
      - name: es-storage
        emptyDir: {}  # ES is stateful; this could be replaced with a persistent volume

elk-svc-kibana.yaml

apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    k8s-app: elk
spec:
  selector:
    k8s-app: elk
  ports:
  - port: 5601
    name: kibana

elk-svc-logstash.yaml

apiVersion: v1
kind: Service
metadata:
  name: logstash
  labels:
    k8s-app: elk
spec:
  selector:
    k8s-app: elk
  ports:
  - port: 8600
    name: logstash

Creating the Pod and Services

kubectl create -f elk-deploy.yaml
kubectl create -f elk-svc-logstash.yaml
kubectl create -f elk-svc-kibana.yaml
kubectl get pods -o wide
kubectl get services -o wide

Finally, open http://kibana.default.svc.cluster.local:5601 (the in-cluster address of the Kibana service) to use the stack.
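The URL above follows the standard Kubernetes in-cluster service DNS scheme, `<service>.<namespace>.svc.<cluster-domain>`. A trivial sketch of that naming rule (the helper function is illustrative, not part of the book's code):

```python
def cluster_dns(service: str, namespace: str = "default",
                domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name of a Kubernetes Service."""
    return f"{service}.{namespace}.svc.{domain}"

# The two services exposed above, as seen from inside the cluster:
print(cluster_dns("kibana"))    # kibana.default.svc.cluster.local
print(cluster_dns("logstash"))  # logstash.default.svc.cluster.local
```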
Preparing the Test Images
The test uses nginx as the server; the corresponding nginx.conf is as follows.

worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    log_format logstash '$http_host '
                        '$remote_addr [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '$request_time '
                        '$upstream_response_time';
    access_log /var/log/nginx/access.log logstash;

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html/k8s-elk-demo;
            index index.html;
        }
    }
}
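To see what fields the Logstash grok stage should extract from lines produced by the log_format above, here is a small Python sketch. The regex is a hand-written equivalent created for this illustration (not the actual NGINXACCESS grok pattern), and the sample line is made up:

```python
import re

# One named group per variable in the `log_format logstash` directive
LINE_RE = re.compile(
    r'(?P<http_host>\S+) '
    r'(?P<remote_addr>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'(?P<request_time>\S+) '
    r'(?P<upstream_response_time>\S+)'
)

sample = ('example.com 10.0.0.1 [15/Nov/2018:14:03:00 +0800] '
          '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.61.0" 0.002 0.001')

fields = LINE_RE.match(sample).groupdict()
print(fields["status"], fields["request"])  # 200 GET /index.html HTTP/1.1
```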

Dockerfile

FROM nginx:alpine 
ADD nginx.conf /etc/nginx/nginx.conf

The data-collection agent is log-courier, configured as follows.
log-courier.conf

{
  "network": {
    "servers": [ "logstash:8600" ],
    "transport": "tcp"
  },
  "files": [
    {
      "paths": [ "/var/log/nginx/access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}
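Since log-courier.conf is plain JSON, it can be sanity-checked before being baked into the image. A minimal sketch (not from the book) that parses the config and verifies the fields the Logstash courier input relies on:

```python
import json

# The log-courier.conf contents from above, embedded as a string for the check
conf_text = """
{
  "network": {
    "servers": [ "logstash:8600" ],
    "transport": "tcp"
  },
  "files": [
    {
      "paths": [ "/var/log/nginx/access.log" ],
      "fields": { "type": "nginx" }
    }
  ]
}
"""

conf = json.loads(conf_text)  # raises ValueError on malformed JSON
assert conf["network"]["transport"] == "tcp"
assert conf["files"][0]["fields"]["type"] == "nginx"
print("log-courier.conf OK")
```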

Dockerfile

FROM centos:7
# Install log-courier
ADD https://copr.fedoraproject.org/coprs/driskell/log-courier/repo/epel-7/driskell-log-courier-epel-7.repo /etc/yum.repos.d/
RUN yum install -y epel-release --nogpgcheck && yum install -y log-courier --nogpgcheck && yum install -y nginx
# Configure log-courier
ADD log-courier.conf /etc/log-courier/log-courier.conf
CMD log-courier -config=/etc/log-courier/log-courier.conf

Testing the Deployment
For the integration test, the log-courier and nginx containers share the log volume log-storage (write consistency between the two containers sharing the directory is not considered in this test).
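The shared log-storage volume works because both containers mount the same directory: nginx appends lines to access.log, and log-courier picks them up from the same path. A toy Python illustration of that shared-directory idea (illustrative only; nothing here is from the book):

```python
import pathlib
import tempfile

# A temporary directory stands in for the log-storage emptyDir volume
shared = pathlib.Path(tempfile.mkdtemp())
log = shared / "access.log"

# "nginx" side: append an access-log line to the shared file
with log.open("a") as f:
    f.write("10.0.0.1 GET / 200\n")

# "log-courier" side: read the newly written line from the same path
print(log.read_text(), end="")  # 10.0.0.1 GET / 200
```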

demo.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  labels:
    k8s-app: demo-app
spec:
  type: LoadBalancer
  selector:
    lb-target: web
  ports:
  - port: 80
    name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-app-v1
  labels:
    k8s-app: demo-app-v1
spec:
  replicas: 3
  template:
    metadata:
      labels:
        k8s-app: demo-app-v1
        lb-target: web
    spec:
      containers:
      - image: hub.c.163.com/gobyoung/elk:log-courier-latest
        name: log-courier
        volumeMounts:
        - name: log-storage
          mountPath: /var/log/nginx
      - image: hub.c.163.com/gobyoung/elk:nginx-demo
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: log-storage
          mountPath: /var/log/nginx
        - name: www-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www-storage
        gitRepo:
          repository: https://github.com/tazjin/k8s-elk-demo.git
          revision: static-v1
      - name: log-storage
        emptyDir: {}

Generating the Template
Based on the tests above, the final ELK orchestration template for the NetEase Cloud basic-services platform is elk.yaml.

---
# Kibana service
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: laidonglin-k8sk8sk8s  # change to your namespace
  labels:
    name: elk-demo-kibana
spec:
  selector:
    name: elk-demo-kibana
  ports:
  - port: 5601
    name: kibana
---
# Logstash service
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: laidonglin-k8sk8sk8s
  labels:
    name: elk-demo-logstash
spec:
  selector:
    name: elk-demo-logstash
  ports:
  - port: 8600
    name: logstash
---
# Elasticsearch service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: laidonglin-k8sk8sk8s
  labels:
    name: elk-demo-es
spec:
  selector:
    name: elk-demo-es
  ports:
  - port: 9200
    name: elasticsearch
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elk-demo-es
  namespace: laidonglin-k8sk8sk8s
  labels:
    name: elk-demo-es
spec:
  replicas: 1
  minReadySeconds: 10
  revisionHistoryLimit: 1
  template:
    metadata:
      labels:
        name: elk-demo-es
      name: elk-demo-es
      namespace: laidonglin-k8sk8sk8s
    spec:
      containers:
      - image: hub.c.163.com/elkdemo/elasticsearch:2.4.4-alpine
        name: elasticsearch
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9200
        volumeMounts:
        - name: es-storage
          mountPath: /usr/share/elasticsearch/data
        resources:
          limits:
            cpu: "1"
            memory: "1073741824"
          requests:
            cpu: "1"
            memory: "1073741824"
      volumes:
      - name: es-storage
        emptyDir: {}
      # Network and node selectors for this deployment (platform-specific fields)
      node:
        cpu: 1000m
        memory: "1073741824"
      networks:
      - netType: lan
      nodeSelector:
        stateful: "false"
      tenantid: 6089d765c34a446e93778e1cd4133f72
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elk-demo-logstash
  namespace: laidonglin-k8sk8sk8s
  labels:
    name: elk-demo-logstash
spec:
  replicas: 1
  minReadySeconds: 10
  revisionHistoryLimit: 1
  template:
    metadata:
      labels:
        name: elk-demo-logstash
      name: elk-demo-logstash
      namespace: laidonglin-k8sk8sk8s
    spec:
      containers:
      - image: hub.c.163.com/elkdemo/logstash:2-alpine
        name: logstash
        ports:
        - containerPort: 8600
        resources:
          limits:
            cpu: "1"
            memory: "1073741824"
          requests:
            cpu: "1"
            memory: "1073741824"
      node:
        cpu: 1000m
        memory: "1073741824"
      networks:
      - netType: lan
      nodeSelector:
        flavorid: "176"
        stateful: "false"
      tenantid: 6089d765c34a446e93778e1cd4133f72
      # required
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elk-demo-kibana
  namespace: laidonglin-k8sk8sk8s
  labels:
    name: elk-demo-kibana
spec:
  replicas: 1
  minReadySeconds: 10
  revisionHistoryLimit: 1
  template:
    metadata:
      labels:
        name: elk-demo-kibana
      name: elk-demo-kibana
      namespace: laidonglin-k8sk8sk8s
    spec:
      containers:
      - image: hub.c.163.com/elkdemo/kibana:4.6.4-alpine
        name: kibana
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
        resources:
          limits:
            cpu: "1"
            memory: "1073741824"
          requests:
            cpu: "1"
            memory: "1073741824"
      # Network and node selectors for this deployment (platform-specific fields)
      node:
        cpu: 1000m
        memory: "1073741824"
      networks:
      - netType: lan
      nodeSelector:
        stateful: "false"


This article is excerpted from Cloud Native Application Architecture Practice (《云原生应用架构实践》), written by the NetEase Cloud basic-services architecture team.

