TUN-813: Clean up cloudflared dependencies

Areg Harutyunyan
2018-07-24 18:04:33 -05:00
parent d06fc520c7
commit 0468866626
3310 changed files with 993 additions and 1223303 deletions


@@ -1,26 +0,0 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*~
*#
.build


@@ -1,10 +0,0 @@
sudo: false
language: go
go:
- 1.6.3
- 1.7
- 1.8.1
script:
- go test -short ./...


@@ -1,109 +0,0 @@
## 0.8.0 / 2016-08-17
* [CHANGE] Registry is doing more consistency checks. This might break
existing setups that used to export inconsistent metrics.
* [CHANGE] Pushing to Pushgateway moved to package `push` and changed to allow
arbitrary grouping.
* [CHANGE] Removed `SelfCollector`.
* [CHANGE] Removed `PanicOnCollectError` and `EnableCollectChecks` methods.
* [CHANGE] Moved packages to the prometheus/common repo: `text`, `model`,
`extraction`.
* [CHANGE] Deprecated a number of functions.
* [FEATURE] Allow custom registries. Added `Registerer` and `Gatherer`
interfaces.
* [FEATURE] Separated HTTP exposition, allowing custom HTTP handlers (package
`promhttp`) and enabling the creation of other exposition mechanisms.
* [FEATURE] `MustRegister` is variadic now, allowing registration of many
collectors in one call.
* [FEATURE] Added HTTP API v1 package.
* [ENHANCEMENT] Numerous documentation improvements.
* [ENHANCEMENT] Improved metric sorting.
* [ENHANCEMENT] Inlined fnv64a hashing for improved performance.
* [ENHANCEMENT] Several test improvements.
* [BUGFIX] Handle collisions in MetricVec.
## 0.7.0 / 2015-07-27
* [CHANGE] Rename ExporterLabelPrefix to ExportedLabelPrefix.
* [BUGFIX] Closed gaps in metric consistency check.
* [BUGFIX] Validate LabelName/LabelSet on JSON unmarshaling.
* [ENHANCEMENT] Document the possibility of creating "empty" metrics in a metric vector.
* [ENHANCEMENT] Fix and clarify various doc comments and the README.md.
* [ENHANCEMENT] (Kind of) solve "The Proxy Problem" of http.InstrumentHandler.
* [ENHANCEMENT] Change responseWriterDelegator.written to int64.
## 0.6.0 / 2015-06-01
* [CHANGE] Rename process_goroutines to go_goroutines.
* [ENHANCEMENT] Validate label names during YAML decoding.
* [ENHANCEMENT] Add LabelName regular expression.
* [BUGFIX] Ensure alignment of struct members for 32-bit systems.
## 0.5.0 / 2015-05-06
* [BUGFIX] Removed a weakness in the fingerprinting aka signature code.
This makes fingerprinting slower and more allocation-heavy, but the
weakness was too severe to be tolerated.
* [CHANGE] As a result of the above, Metric.Fingerprint is now returning
a different fingerprint. To keep the same fingerprint, the new method
Metric.FastFingerprint was introduced, which will be used by the
Prometheus server for storage purposes (implying that a collision
detection has to be added, too).
* [ENHANCEMENT] The Metric.Equal and Metric.Before do not depend on
fingerprinting anymore, removing the possibility of an undetected
fingerprint collision.
* [FEATURE] The Go collector in the exposition library includes garbage
collection stats.
* [FEATURE] The exposition library allows creating constant "throw-away" summaries and histograms.
* [CHANGE] A number of new reserved labels and prefixes.
## 0.4.0 / 2015-04-08
* [CHANGE] Return NaN when Summaries have no observations yet.
* [BUGFIX] Properly handle Summary decay upon Write().
* [BUGFIX] Fix the documentation link to the consumption library.
* [FEATURE] Allow the metric family injection hook to merge with existing
metric families.
* [ENHANCEMENT] Removed cgo dependency and conditional compilation of procfs.
* [MAINTENANCE] Adjusted to changes in matttproud/golang_protobuf_extensions.
## 0.3.2 / 2015-03-11
* [BUGFIX] Fixed the receiver type of COWMetric.Set(). This method is
only used by the Prometheus server internally.
* [CLEANUP] Added licenses of vendored code left out by godep.
## 0.3.1 / 2015-03-04
* [ENHANCEMENT] Switched fingerprinting functions from own free list to
sync.Pool.
* [CHANGE] Makefile uses Go 1.4.2 now (only relevant for examples and tests).
## 0.3.0 / 2015-03-03
* [CHANGE] Changed the fingerprinting for metrics. THIS WILL INVALIDATE ALL
PERSISTED FINGERPRINTS. IF YOU COMPILE THE PROMETHEUS SERVER WITH THIS
VERSION, YOU HAVE TO WIPE THE PREVIOUSLY CREATED STORAGE.
* [CHANGE] LabelValuesToSignature removed. (Nobody had used it, and it was
arguably broken.)
* [CHANGE] Vendored dependencies. Those are only used by the Makefile. If
client_golang is used as a library, the vendoring will stay out of your way.
* [BUGFIX] Remove a weakness in the fingerprinting for metrics. (This made
the fingerprinting change above necessary.)
* [FEATURE] Added new fingerprinting functions SignatureForLabels and
SignatureWithoutLabels to be used by the Prometheus server. These functions
require fewer allocations than the ones currently used by the server.
## 0.2.0 / 2015-02-23
* [FEATURE] Introduce new Histogram metric type.
* [CHANGE] Ignore process collector errors for now (better error handling
pending).
* [CHANGE] Use clear error interface for process pidFn.
* [BUGFIX] Fix Go download links for several archs and OSes.
* [ENHANCEMENT] Massively improve Gauge and Counter performance.
* [ENHANCEMENT] Catch illegal label names for summaries in histograms.
* [ENHANCEMENT] Reduce allocations during fingerprinting.
* [ENHANCEMENT] Remove cgo dependency. procfs package will only be included if
both cgo is available and the build is for an OS with procfs.
* [CLEANUP] Clean up code style issues.
* [CLEANUP] Mark slow tests as such and exclude them from Travis.
* [CLEANUP] Update protobuf library package name.
* [CLEANUP] Updated vendoring of beorn7/perks.
## 0.1.0 / 2015-02-02
* [CLEANUP] Introduced semantic versioning and changelog. From now on,
changes will be reported in this file.


@@ -1,18 +0,0 @@
# Contributing
Prometheus uses GitHub to manage reviews of pull requests.
* If you have a trivial fix or improvement, go ahead and create a pull request,
addressing (with `@...`) the maintainer of this repository (see
[MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.
* If you plan to do something more involved, first discuss your ideas
on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
This will avoid unnecessary work and surely give you and us a good deal
of inspiration.
* Relevant coding style guidelines are the [Go Code Review
Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
and the _Formatting and style_ section of Peter Bourgon's [Go: Best
Practices for Production
Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).


@@ -1 +0,0 @@
* Björn Rabenstein <beorn@soundcloud.com>


@@ -1,47 +0,0 @@
# Prometheus Go client library
[![Build Status](https://travis-ci.org/prometheus/client_golang.svg?branch=master)](https://travis-ci.org/prometheus/client_golang)
[![Go Report Card](https://goreportcard.com/badge/github.com/prometheus/client_golang)](https://goreportcard.com/report/github.com/prometheus/client_golang)
This is the [Go](http://golang.org) client library for
[Prometheus](http://prometheus.io). It has two separate parts, one for
instrumenting application code, and one for creating clients that talk to the
Prometheus HTTP API.
## Instrumenting applications
[![code-coverage](http://gocover.io/_badge/github.com/prometheus/client_golang/prometheus)](http://gocover.io/github.com/prometheus/client_golang/prometheus) [![go-doc](https://godoc.org/github.com/prometheus/client_golang/prometheus?status.svg)](https://godoc.org/github.com/prometheus/client_golang/prometheus)
The
[`prometheus` directory](https://github.com/prometheus/client_golang/tree/master/prometheus)
contains the instrumentation library. See the
[best practices section](http://prometheus.io/docs/practices/naming/) of the
Prometheus documentation to learn more about instrumenting applications.
The
[`examples` directory](https://github.com/prometheus/client_golang/tree/master/examples)
contains simple examples of instrumented code.
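For orientation, here is a minimal sketch of what such instrumentation can look like; the metric name, help string, and listen address are illustrative rather than prescribed by the library:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestsTotal is an illustrative counter; the metric name is not prescribed
// by the library.
var requestsTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total number of handled requests.",
})

func main() {
	prometheus.MustRegister(requestsTotal)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requestsTotal.Inc() // count every handled request
		w.Write([]byte("ok"))
	})

	// Expose all registered metrics on /metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```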
## Client for the Prometheus HTTP API
[![code-coverage](http://gocover.io/_badge/github.com/prometheus/client_golang/api/prometheus)](http://gocover.io/github.com/prometheus/client_golang/api/prometheus) [![go-doc](https://godoc.org/github.com/prometheus/client_golang/api/prometheus?status.svg)](https://godoc.org/github.com/prometheus/client_golang/api/prometheus)
The
[`api/prometheus` directory](https://github.com/prometheus/client_golang/tree/master/api/prometheus)
contains the client for the
[Prometheus HTTP API](http://prometheus.io/docs/querying/api/). It allows you
to write Go applications that query time series data from a Prometheus
server. It is still in alpha stage.
## Where is `model`, `extraction`, and `text`?
The `model` package has been moved to
[`prometheus/common/model`](https://github.com/prometheus/common/tree/master/model).
The `extraction` and `text` packages are now contained in
[`prometheus/common/expfmt`](https://github.com/prometheus/common/tree/master/expfmt).
## Contributing and community
See the [contributing guidelines](CONTRIBUTING.md) and the
[Community section](http://prometheus.io/community/) of the homepage.


@@ -1 +0,0 @@
0.8.0


@@ -1,131 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build go1.7
// Package api provides clients for the HTTP APIs.
package api
import (
"context"
"io/ioutil"
"net"
"net/http"
"net/url"
"path"
"strings"
"time"
)
// DefaultRoundTripper is used if no RoundTripper is set in Config.
var DefaultRoundTripper http.RoundTripper = &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
TLSHandshakeTimeout: 10 * time.Second,
}
// Config defines configuration parameters for a new client.
type Config struct {
// The address of the Prometheus to connect to.
Address string
// RoundTripper is used by the Client to drive HTTP requests. If not
// provided, DefaultRoundTripper will be used.
RoundTripper http.RoundTripper
}
func (cfg *Config) roundTripper() http.RoundTripper {
if cfg.RoundTripper == nil {
return DefaultRoundTripper
}
return cfg.RoundTripper
}
// Client is the interface for an API client.
type Client interface {
URL(ep string, args map[string]string) *url.URL
Do(context.Context, *http.Request) (*http.Response, []byte, error)
}
// NewClient returns a new Client.
//
// It is safe to use the returned Client from multiple goroutines.
func NewClient(cfg Config) (Client, error) {
u, err := url.Parse(cfg.Address)
if err != nil {
return nil, err
}
u.Path = strings.TrimRight(u.Path, "/")
return &httpClient{
endpoint: u,
client: http.Client{Transport: cfg.roundTripper()},
}, nil
}
type httpClient struct {
endpoint *url.URL
client http.Client
}
func (c *httpClient) URL(ep string, args map[string]string) *url.URL {
p := path.Join(c.endpoint.Path, ep)
for arg, val := range args {
arg = ":" + arg
p = strings.Replace(p, arg, val, -1)
}
u := *c.endpoint
u.Path = p
return &u
}
func (c *httpClient) Do(ctx context.Context, req *http.Request) (*http.Response, []byte, error) {
if ctx != nil {
req = req.WithContext(ctx)
}
resp, err := c.client.Do(req)
defer func() {
if resp != nil {
resp.Body.Close()
}
}()
if err != nil {
return nil, nil, err
}
var body []byte
done := make(chan struct{})
go func() {
body, err = ioutil.ReadAll(resp.Body)
close(done)
}()
select {
case <-ctx.Done():
err = resp.Body.Close()
<-done
if err == nil {
err = ctx.Err()
}
case <-done:
}
return resp, body, err
}
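For reference, a rough sketch of how this low-level client might be driven directly; the server address, endpoint path, and label name below are illustrative assumptions, not part of the package:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/api"
)

func main() {
	// NewClient only parses the configured address; it does not contact the server.
	client, err := api.NewClient(api.Config{
		Address: "http://localhost:9090", // illustrative address
	})
	if err != nil {
		log.Fatal(err)
	}

	// URL joins the endpoint onto the configured address and expands
	// ":name"-style placeholders from the args map.
	u := client.URL("/api/v1/label/:name/values", map[string]string{"name": "job"})

	req, err := http.NewRequest(http.MethodGet, u.String(), nil)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Do returns the response, the fully read body, and any transport error.
	resp, body, err := client.Do(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.StatusCode, string(body))
}
```

In practice the `api/prometheus/v1` bindings later in this diff wrap this client, so most callers never build requests by hand.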


@@ -1,115 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build go1.7
package api
import (
"net/http"
"net/url"
"testing"
)
func TestConfig(t *testing.T) {
c := Config{}
if c.roundTripper() != DefaultRoundTripper {
t.Fatalf("expected default roundtripper for nil RoundTripper field")
}
}
func TestClientURL(t *testing.T) {
tests := []struct {
address string
endpoint string
args map[string]string
expected string
}{
{
address: "http://localhost:9090",
endpoint: "/test",
expected: "http://localhost:9090/test",
},
{
address: "http://localhost",
endpoint: "/test",
expected: "http://localhost/test",
},
{
address: "http://localhost:9090",
endpoint: "test",
expected: "http://localhost:9090/test",
},
{
address: "http://localhost:9090/prefix",
endpoint: "/test",
expected: "http://localhost:9090/prefix/test",
},
{
address: "https://localhost:9090/",
endpoint: "/test/",
expected: "https://localhost:9090/test",
},
{
address: "http://localhost:9090",
endpoint: "/test/:param",
args: map[string]string{
"param": "content",
},
expected: "http://localhost:9090/test/content",
},
{
address: "http://localhost:9090",
endpoint: "/test/:param/more/:param",
args: map[string]string{
"param": "content",
},
expected: "http://localhost:9090/test/content/more/content",
},
{
address: "http://localhost:9090",
endpoint: "/test/:param/more/:foo",
args: map[string]string{
"param": "content",
"foo": "bar",
},
expected: "http://localhost:9090/test/content/more/bar",
},
{
address: "http://localhost:9090",
endpoint: "/test/:param",
args: map[string]string{
"nonexistant": "content",
},
expected: "http://localhost:9090/test/:param",
},
}
for _, test := range tests {
ep, err := url.Parse(test.address)
if err != nil {
t.Fatal(err)
}
hclient := &httpClient{
endpoint: ep,
client: http.Client{Transport: DefaultRoundTripper},
}
u := hclient.URL(test.endpoint, test.args)
if u.String() != test.expected {
t.Errorf("unexpected result: got %s, want %s", u, test.expected)
continue
}
}
}


@@ -1,261 +0,0 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build go1.7
// Package v1 provides bindings to the Prometheus HTTP API v1:
// http://prometheus.io/docs/querying/api/
package v1
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strconv"
"time"
"github.com/prometheus/client_golang/api"
"github.com/prometheus/common/model"
)
const (
statusAPIError = 422
apiPrefix = "/api/v1"
epQuery = apiPrefix + "/query"
epQueryRange = apiPrefix + "/query_range"
epLabelValues = apiPrefix + "/label/:name/values"
epSeries = apiPrefix + "/series"
)
// ErrorType models the different API error types.
type ErrorType string
// Possible values for ErrorType.
const (
ErrBadData ErrorType = "bad_data"
ErrTimeout = "timeout"
ErrCanceled = "canceled"
ErrExec = "execution"
ErrBadResponse = "bad_response"
)
// Error is an error returned by the API.
type Error struct {
Type ErrorType
Msg string
}
func (e *Error) Error() string {
return fmt.Sprintf("%s: %s", e.Type, e.Msg)
}
// Range represents a sliced time range.
type Range struct {
// The boundaries of the time range.
Start, End time.Time
// The maximum time between two slices within the boundaries.
Step time.Duration
}
// API provides bindings for Prometheus's v1 API.
type API interface {
// Query performs a query for the given time.
Query(ctx context.Context, query string, ts time.Time) (model.Value, error)
// QueryRange performs a query for the given range.
QueryRange(ctx context.Context, query string, r Range) (model.Value, error)
// LabelValues performs a query for the values of the given label.
LabelValues(ctx context.Context, label string) (model.LabelValues, error)
}
// queryResult contains result data for a query.
type queryResult struct {
Type model.ValueType `json:"resultType"`
Result interface{} `json:"result"`
// The decoded value.
v model.Value
}
func (qr *queryResult) UnmarshalJSON(b []byte) error {
v := struct {
Type model.ValueType `json:"resultType"`
Result json.RawMessage `json:"result"`
}{}
err := json.Unmarshal(b, &v)
if err != nil {
return err
}
switch v.Type {
case model.ValScalar:
var sv model.Scalar
err = json.Unmarshal(v.Result, &sv)
qr.v = &sv
case model.ValVector:
var vv model.Vector
err = json.Unmarshal(v.Result, &vv)
qr.v = vv
case model.ValMatrix:
var mv model.Matrix
err = json.Unmarshal(v.Result, &mv)
qr.v = mv
default:
err = fmt.Errorf("unexpected value type %q", v.Type)
}
return err
}
// NewAPI returns a new API for the client.
//
// It is safe to use the returned API from multiple goroutines.
func NewAPI(c api.Client) API {
return &httpAPI{client: apiClient{c}}
}
type httpAPI struct {
client api.Client
}
func (h *httpAPI) Query(ctx context.Context, query string, ts time.Time) (model.Value, error) {
u := h.client.URL(epQuery, nil)
q := u.Query()
q.Set("query", query)
q.Set("time", ts.Format(time.RFC3339Nano))
u.RawQuery = q.Encode()
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, err
}
_, body, err := h.client.Do(ctx, req)
if err != nil {
return nil, err
}
var qres queryResult
err = json.Unmarshal(body, &qres)
return model.Value(qres.v), err
}
func (h *httpAPI) QueryRange(ctx context.Context, query string, r Range) (model.Value, error) {
u := h.client.URL(epQueryRange, nil)
q := u.Query()
var (
start = r.Start.Format(time.RFC3339Nano)
end = r.End.Format(time.RFC3339Nano)
step = strconv.FormatFloat(r.Step.Seconds(), 'f', 3, 64)
)
q.Set("query", query)
q.Set("start", start)
q.Set("end", end)
q.Set("step", step)
u.RawQuery = q.Encode()
req, err := http.NewRequest("GET", u.String(), nil)
if err != nil {
return nil, err
}
_, body, err := h.client.Do(ctx, req)
if err != nil {
return nil, err
}
var qres queryResult
err = json.Unmarshal(body, &qres)
return model.Value(qres.v), err
}
func (h *httpAPI) LabelValues(ctx context.Context, label string) (model.LabelValues, error) {
u := h.client.URL(epLabelValues, map[string]string{"name": label})
req, err := http.NewRequest(http.MethodGet, u.String(), nil)
if err != nil {
return nil, err
}
_, body, err := h.client.Do(ctx, req)
if err != nil {
return nil, err
}
var labelValues model.LabelValues
err = json.Unmarshal(body, &labelValues)
return labelValues, err
}
// apiClient wraps a regular client and processes successful API responses.
// Successful also includes responses that errored at the API level.
type apiClient struct {
api.Client
}
type apiResponse struct {
Status string `json:"status"`
Data json.RawMessage `json:"data"`
ErrorType ErrorType `json:"errorType"`
Error string `json:"error"`
}
func (c apiClient) Do(ctx context.Context, req *http.Request) (*http.Response, []byte, error) {
resp, body, err := c.Client.Do(ctx, req)
if err != nil {
return resp, body, err
}
code := resp.StatusCode
if code/100 != 2 && code != statusAPIError {
return resp, body, &Error{
Type: ErrBadResponse,
Msg: fmt.Sprintf("bad response code %d", resp.StatusCode),
}
}
var result apiResponse
if err = json.Unmarshal(body, &result); err != nil {
return resp, body, &Error{
Type: ErrBadResponse,
Msg: err.Error(),
}
}
if (code == statusAPIError) != (result.Status == "error") {
err = &Error{
Type: ErrBadResponse,
Msg: "inconsistent body for response code",
}
}
if code == statusAPIError && result.Status == "error" {
err = &Error{
Type: result.ErrorType,
Msg: result.Error,
}
}
return resp, []byte(result.Data), err
}
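A rough usage sketch of these bindings, assuming the package is imported as `github.com/prometheus/client_golang/api/prometheus/v1` (inferred from the package layout); the server address and PromQL expressions are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"}) // illustrative address
	if err != nil {
		log.Fatal(err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Query returns a model.Value whose concrete type (scalar, vector, or
	// matrix) depends on the expression.
	val, err := promAPI.Query(ctx, "up", time.Now())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(val)

	// QueryRange takes the same expression plus a Range with Start, End, and Step.
	rangeVal, err := promAPI.QueryRange(ctx, "rate(http_requests_total[5m])", v1.Range{
		Start: time.Now().Add(-time.Hour),
		End:   time.Now(),
		Step:  time.Minute,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rangeVal)
}
```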


@@ -1,381 +0,0 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build go1.7
package v1
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"reflect"
"strings"
"testing"
"time"
"github.com/prometheus/common/model"
)
type apiTest struct {
do func() (interface{}, error)
inErr error
inRes interface{}
reqPath string
reqParam url.Values
reqMethod string
res interface{}
err error
}
type apiTestClient struct {
*testing.T
curTest apiTest
}
func (c *apiTestClient) URL(ep string, args map[string]string) *url.URL {
path := ep
for k, v := range args {
path = strings.Replace(path, ":"+k, v, -1)
}
u := &url.URL{
Host: "test:9090",
Path: path,
}
return u
}
func (c *apiTestClient) Do(ctx context.Context, req *http.Request) (*http.Response, []byte, error) {
test := c.curTest
if req.URL.Path != test.reqPath {
c.Errorf("unexpected request path: want %s, got %s", test.reqPath, req.URL.Path)
}
if req.Method != test.reqMethod {
c.Errorf("unexpected request method: want %s, got %s", test.reqMethod, req.Method)
}
b, err := json.Marshal(test.inRes)
if err != nil {
c.Fatal(err)
}
resp := &http.Response{}
if test.inErr != nil {
resp.StatusCode = statusAPIError
} else {
resp.StatusCode = http.StatusOK
}
return resp, b, test.inErr
}
func TestAPIs(t *testing.T) {
testTime := time.Now()
client := &apiTestClient{T: t}
queryAPI := &httpAPI{
client: client,
}
doQuery := func(q string, ts time.Time) func() (interface{}, error) {
return func() (interface{}, error) {
return queryAPI.Query(context.Background(), q, ts)
}
}
doQueryRange := func(q string, rng Range) func() (interface{}, error) {
return func() (interface{}, error) {
return queryAPI.QueryRange(context.Background(), q, rng)
}
}
doLabelValues := func(label string) func() (interface{}, error) {
return func() (interface{}, error) {
return queryAPI.LabelValues(context.Background(), label)
}
}
queryTests := []apiTest{
{
do: doQuery("2", testTime),
inRes: &queryResult{
Type: model.ValScalar,
Result: &model.Scalar{
Value: 2,
Timestamp: model.TimeFromUnix(testTime.Unix()),
},
},
reqMethod: "GET",
reqPath: "/api/v1/query",
reqParam: url.Values{
"query": []string{"2"},
"time": []string{testTime.Format(time.RFC3339Nano)},
},
res: &model.Scalar{
Value: 2,
Timestamp: model.TimeFromUnix(testTime.Unix()),
},
},
{
do: doQuery("2", testTime),
inErr: fmt.Errorf("some error"),
reqMethod: "GET",
reqPath: "/api/v1/query",
reqParam: url.Values{
"query": []string{"2"},
"time": []string{testTime.Format(time.RFC3339Nano)},
},
err: fmt.Errorf("some error"),
},
{
do: doQueryRange("2", Range{
Start: testTime.Add(-time.Minute),
End: testTime,
Step: time.Minute,
}),
inErr: fmt.Errorf("some error"),
reqMethod: "GET",
reqPath: "/api/v1/query_range",
reqParam: url.Values{
"query": []string{"2"},
"start": []string{testTime.Add(-time.Minute).Format(time.RFC3339Nano)},
"end": []string{testTime.Format(time.RFC3339Nano)},
"step": []string{time.Minute.String()},
},
err: fmt.Errorf("some error"),
},
{
do: doLabelValues("mylabel"),
inRes: []string{"val1", "val2"},
reqMethod: "GET",
reqPath: "/api/v1/label/mylabel/values",
res: model.LabelValues{"val1", "val2"},
},
{
do: doLabelValues("mylabel"),
inErr: fmt.Errorf("some error"),
reqMethod: "GET",
reqPath: "/api/v1/label/mylabel/values",
err: fmt.Errorf("some error"),
},
}
var tests []apiTest
tests = append(tests, queryTests...)
for _, test := range tests {
client.curTest = test
res, err := test.do()
if test.err != nil {
if err == nil {
t.Errorf("expected error %q but got none", test.err)
continue
}
if err.Error() != test.err.Error() {
t.Errorf("unexpected error: want %s, got %s", test.err, err)
}
continue
}
if err != nil {
t.Errorf("unexpected error: %s", err)
continue
}
if !reflect.DeepEqual(res, test.res) {
t.Errorf("unexpected result: want %v, got %v", test.res, res)
}
}
}
type testClient struct {
*testing.T
ch chan apiClientTest
req *http.Request
}
type apiClientTest struct {
code int
response interface{}
expected string
err *Error
}
func (c *testClient) URL(ep string, args map[string]string) *url.URL {
return nil
}
func (c *testClient) Do(ctx context.Context, req *http.Request) (*http.Response, []byte, error) {
if ctx == nil {
c.Fatalf("context was not passed down")
}
if req != c.req {
c.Fatalf("request was not passed down")
}
test := <-c.ch
var b []byte
var err error
switch v := test.response.(type) {
case string:
b = []byte(v)
default:
b, err = json.Marshal(v)
if err != nil {
c.Fatal(err)
}
}
resp := &http.Response{
StatusCode: test.code,
}
return resp, b, nil
}
func TestAPIClientDo(t *testing.T) {
tests := []apiClientTest{
{
response: &apiResponse{
Status: "error",
Data: json.RawMessage(`null`),
ErrorType: ErrBadData,
Error: "failed",
},
err: &Error{
Type: ErrBadData,
Msg: "failed",
},
code: statusAPIError,
expected: `null`,
},
{
response: &apiResponse{
Status: "error",
Data: json.RawMessage(`"test"`),
ErrorType: ErrTimeout,
Error: "timed out",
},
err: &Error{
Type: ErrTimeout,
Msg: "timed out",
},
code: statusAPIError,
expected: `test`,
},
{
response: "bad json",
err: &Error{
Type: ErrBadResponse,
Msg: "bad response code 400",
},
code: http.StatusBadRequest,
},
{
response: "bad json",
err: &Error{
Type: ErrBadResponse,
Msg: "invalid character 'b' looking for beginning of value",
},
code: statusAPIError,
},
{
response: &apiResponse{
Status: "success",
Data: json.RawMessage(`"test"`),
},
err: &Error{
Type: ErrBadResponse,
Msg: "inconsistent body for response code",
},
code: statusAPIError,
},
{
response: &apiResponse{
Status: "success",
Data: json.RawMessage(`"test"`),
ErrorType: ErrTimeout,
Error: "timed out",
},
err: &Error{
Type: ErrBadResponse,
Msg: "inconsistent body for response code",
},
code: statusAPIError,
},
{
response: &apiResponse{
Status: "error",
Data: json.RawMessage(`"test"`),
ErrorType: ErrTimeout,
Error: "timed out",
},
err: &Error{
Type: ErrBadResponse,
Msg: "inconsistent body for response code",
},
code: http.StatusOK,
},
}
tc := &testClient{
T: t,
ch: make(chan apiClientTest, 1),
req: &http.Request{},
}
client := &apiClient{tc}
for _, test := range tests {
tc.ch <- test
_, body, err := client.Do(context.Background(), tc.req)
if test.err != nil {
if err == nil {
t.Errorf("expected error %q but got none", test.err)
continue
}
if test.err.Error() != err.Error() {
t.Errorf("unexpected error: want %q, got %q", test.err, err)
}
continue
}
if err != nil {
t.Errorf("unexpeceted error %s", err)
continue
}
want, got := test.expected, string(body)
if want != got {
t.Errorf("unexpected body: want %q, got %q", want, got)
}
}
}


@@ -1,106 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// A simple example exposing fictional RPC latencies with different types of
// random distributions (uniform, normal, and exponential) as Prometheus
// metrics.
package main
import (
"flag"
"log"
"math"
"math/rand"
"net/http"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
var (
addr = flag.String("listen-address", ":8080", "The address to listen on for HTTP requests.")
uniformDomain = flag.Float64("uniform.domain", 0.0002, "The domain for the uniform distribution.")
normDomain = flag.Float64("normal.domain", 0.0002, "The domain for the normal distribution.")
normMean = flag.Float64("normal.mean", 0.00001, "The mean for the normal distribution.")
oscillationPeriod = flag.Duration("oscillation-period", 10*time.Minute, "The duration of the rate oscillation period.")
)
var (
// Create a summary to track fictional interservice RPC latencies for three
// distinct services with different latency distributions. These services are
// differentiated via a "service" label.
rpcDurations = prometheus.NewSummaryVec(
prometheus.SummaryOpts{
Name: "rpc_durations_seconds",
Help: "RPC latency distributions.",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
},
[]string{"service"},
)
// The same as above, but now as a histogram, and only for the normal
// distribution. The buckets are targeted to the parameters of the
// normal distribution, with 20 buckets centered on the mean, each
// half-sigma wide.
rpcDurationsHistogram = prometheus.NewHistogram(prometheus.HistogramOpts{
Name: "rpc_durations_histogram_seconds",
Help: "RPC latency distributions.",
Buckets: prometheus.LinearBuckets(*normMean-5**normDomain, .5**normDomain, 20),
})
)
func init() {
// Register the summary and the histogram with Prometheus's default registry.
prometheus.MustRegister(rpcDurations)
prometheus.MustRegister(rpcDurationsHistogram)
}
func main() {
flag.Parse()
start := time.Now()
oscillationFactor := func() float64 {
return 2 + math.Sin(math.Sin(2*math.Pi*float64(time.Since(start))/float64(*oscillationPeriod)))
}
// Periodically record some sample latencies for the three services.
go func() {
for {
v := rand.Float64() * *uniformDomain
rpcDurations.WithLabelValues("uniform").Observe(v)
time.Sleep(time.Duration(100*oscillationFactor()) * time.Millisecond)
}
}()
go func() {
for {
v := (rand.NormFloat64() * *normDomain) + *normMean
rpcDurations.WithLabelValues("normal").Observe(v)
rpcDurationsHistogram.Observe(v)
time.Sleep(time.Duration(75*oscillationFactor()) * time.Millisecond)
}
}()
go func() {
for {
v := rand.ExpFloat64() / 1e6
rpcDurations.WithLabelValues("exponential").Observe(v)
time.Sleep(time.Duration(50*oscillationFactor()) * time.Millisecond)
}
}()
// Expose the registered metrics via HTTP.
http.Handle("/metrics", promhttp.Handler())
log.Fatal(http.ListenAndServe(*addr, nil))
}


@@ -1,31 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// A minimal example of how to include Prometheus instrumentation.
package main
import (
"flag"
"log"
"net/http"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
var addr = flag.String("listen-address", ":8080", "The address to listen on for HTTP requests.")
func main() {
flag.Parse()
http.Handle("/metrics", promhttp.Handler())
log.Fatal(http.ListenAndServe(*addr, nil))
}


@@ -1,185 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"sync"
"testing"
)
func BenchmarkCounterWithLabelValues(b *testing.B) {
m := NewCounterVec(
CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.WithLabelValues("eins", "zwei", "drei").Inc()
}
}
func BenchmarkCounterWithLabelValuesConcurrent(b *testing.B) {
m := NewCounterVec(
CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
wg := sync.WaitGroup{}
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
for j := 0; j < b.N/10; j++ {
m.WithLabelValues("eins", "zwei", "drei").Inc()
}
wg.Done()
}()
}
wg.Wait()
}
func BenchmarkCounterWithMappedLabels(b *testing.B) {
m := NewCounterVec(
CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.With(Labels{"two": "zwei", "one": "eins", "three": "drei"}).Inc()
}
}
func BenchmarkCounterWithPreparedMappedLabels(b *testing.B) {
m := NewCounterVec(
CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
labels := Labels{"two": "zwei", "one": "eins", "three": "drei"}
for i := 0; i < b.N; i++ {
m.With(labels).Inc()
}
}
func BenchmarkCounterNoLabels(b *testing.B) {
m := NewCounter(CounterOpts{
Name: "benchmark_counter",
Help: "A counter to benchmark it.",
})
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Inc()
}
}
func BenchmarkGaugeWithLabelValues(b *testing.B) {
m := NewGaugeVec(
GaugeOpts{
Name: "benchmark_gauge",
Help: "A gauge to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.WithLabelValues("eins", "zwei", "drei").Set(3.1415)
}
}
func BenchmarkGaugeNoLabels(b *testing.B) {
m := NewGauge(GaugeOpts{
Name: "benchmark_gauge",
Help: "A gauge to benchmark it.",
})
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Set(3.1415)
}
}
func BenchmarkSummaryWithLabelValues(b *testing.B) {
m := NewSummaryVec(
SummaryOpts{
Name: "benchmark_summary",
Help: "A summary to benchmark it.",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.WithLabelValues("eins", "zwei", "drei").Observe(3.1415)
}
}
func BenchmarkSummaryNoLabels(b *testing.B) {
m := NewSummary(SummaryOpts{
Name: "benchmark_summary",
Help: "A summary to benchmark it.",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Observe(3.1415)
}
}
func BenchmarkHistogramWithLabelValues(b *testing.B) {
m := NewHistogramVec(
HistogramOpts{
Name: "benchmark_histogram",
Help: "A histogram to benchmark it.",
},
[]string{"one", "two", "three"},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.WithLabelValues("eins", "zwei", "drei").Observe(3.1415)
}
}
func BenchmarkHistogramNoLabels(b *testing.B) {
m := NewHistogram(HistogramOpts{
Name: "benchmark_histogram",
Help: "A histogram to benchmark it.",
},
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
m.Observe(3.1415)
}
}


@@ -1,114 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"fmt"
"math"
"testing"
dto "github.com/prometheus/client_model/go"
)
func TestCounterAdd(t *testing.T) {
counter := NewCounter(CounterOpts{
Name: "test",
Help: "test help",
ConstLabels: Labels{"a": "1", "b": "2"},
}).(*counter)
counter.Inc()
if expected, got := 1., math.Float64frombits(counter.valBits); expected != got {
t.Errorf("Expected %f, got %f.", expected, got)
}
counter.Add(42)
if expected, got := 43., math.Float64frombits(counter.valBits); expected != got {
t.Errorf("Expected %f, got %f.", expected, got)
}
if expected, got := "counter cannot decrease in value", decreaseCounter(counter).Error(); expected != got {
t.Errorf("Expected error %q, got %q.", expected, got)
}
m := &dto.Metric{}
counter.Write(m)
if expected, got := `label:<name:"a" value:"1" > label:<name:"b" value:"2" > counter:<value:43 > `, m.String(); expected != got {
t.Errorf("expected %q, got %q", expected, got)
}
}
func decreaseCounter(c *counter) (err error) {
defer func() {
if e := recover(); e != nil {
err = e.(error)
}
}()
c.Add(-1)
return nil
}
func TestCounterVecGetMetricWithInvalidLabelValues(t *testing.T) {
testCases := []struct {
desc string
labels Labels
}{
{
desc: "non utf8 label value",
labels: Labels{"a": "\xFF"},
},
{
desc: "not enough label values",
labels: Labels{},
},
{
desc: "too many label values",
labels: Labels{"a": "1", "b": "2"},
},
}
for _, test := range testCases {
counterVec := NewCounterVec(CounterOpts{
Name: "test",
}, []string{"a"})
labelValues := make([]string, 0, len(test.labels))
for _, val := range test.labels {
labelValues = append(labelValues, val)
}
expectPanic(t, func() {
counterVec.WithLabelValues(labelValues...)
}, fmt.Sprintf("WithLabelValues: expected panic because: %s", test.desc))
expectPanic(t, func() {
counterVec.With(test.labels)
}, fmt.Sprintf("WithLabelValues: expected panic because: %s", test.desc))
if _, err := counterVec.GetMetricWithLabelValues(labelValues...); err == nil {
t.Errorf("GetMetricWithLabelValues: expected error because: %s", test.desc)
}
if _, err := counterVec.GetMetricWith(test.labels); err == nil {
t.Errorf("GetMetricWith: expected error because: %s", test.desc)
}
}
}
func expectPanic(t *testing.T, op func(), errorMsg string) {
defer func() {
if err := recover(); err == nil {
t.Error(errorMsg)
}
}()
op()
}


@@ -1,17 +0,0 @@
package prometheus
import (
"testing"
)
func TestNewDescInvalidLabelValues(t *testing.T) {
desc := NewDesc(
"sample_label",
"sample label",
nil,
Labels{"a": "\xFF"},
)
if desc.err == nil {
t.Error("NewDesc: expected an error for the invalid (non-UTF-8) label value")
}
}


@@ -1,118 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import "github.com/prometheus/client_golang/prometheus"
// ClusterManager is an example for a system that might have been built without
// Prometheus in mind. It models a central manager of jobs running in a
// cluster. To turn it into something that collects Prometheus metrics, we
// simply add the two methods required for the Collector interface.
//
// An additional challenge is that multiple instances of the ClusterManager are
// run within the same binary, each in charge of a different zone. We need to
// make use of ConstLabels to be able to register each ClusterManager instance
// with Prometheus.
type ClusterManager struct {
Zone string
OOMCountDesc *prometheus.Desc
RAMUsageDesc *prometheus.Desc
// ... many more fields
}
// ReallyExpensiveAssessmentOfTheSystemState is a mock for the data gathering a
// real cluster manager would have to do. Since it may actually be really
// expensive, it must only be called once per collection. This implementation,
// obviously, only returns some made-up data.
func (c *ClusterManager) ReallyExpensiveAssessmentOfTheSystemState() (
oomCountByHost map[string]int, ramUsageByHost map[string]float64,
) {
// Just example fake data.
oomCountByHost = map[string]int{
"foo.example.org": 42,
"bar.example.org": 2001,
}
ramUsageByHost = map[string]float64{
"foo.example.org": 6.023e23,
"bar.example.org": 3.14,
}
return
}
// Describe simply sends the two Descs in the struct to the channel.
func (c *ClusterManager) Describe(ch chan<- *prometheus.Desc) {
ch <- c.OOMCountDesc
ch <- c.RAMUsageDesc
}
// Collect first triggers the ReallyExpensiveAssessmentOfTheSystemState. Then it
// creates constant metrics for each host on the fly based on the returned data.
//
// Note that Collect could be called concurrently, so we depend on
// ReallyExpensiveAssessmentOfTheSystemState to be concurrency-safe.
func (c *ClusterManager) Collect(ch chan<- prometheus.Metric) {
oomCountByHost, ramUsageByHost := c.ReallyExpensiveAssessmentOfTheSystemState()
for host, oomCount := range oomCountByHost {
ch <- prometheus.MustNewConstMetric(
c.OOMCountDesc,
prometheus.CounterValue,
float64(oomCount),
host,
)
}
for host, ramUsage := range ramUsageByHost {
ch <- prometheus.MustNewConstMetric(
c.RAMUsageDesc,
prometheus.GaugeValue,
ramUsage,
host,
)
}
}
// NewClusterManager creates the two Descs OOMCountDesc and RAMUsageDesc. Note
// that the zone is set as a ConstLabel. (It's different in each instance of the
// ClusterManager, but constant over the lifetime of an instance.) Then there is
// a variable label "host", since we want to partition the collected metrics by
// host. Since all Descs created in this way are consistent across instances,
// with a guaranteed distinction by the "zone" label, we can register different
// ClusterManager instances with the same registry.
func NewClusterManager(zone string) *ClusterManager {
return &ClusterManager{
Zone: zone,
OOMCountDesc: prometheus.NewDesc(
"clustermanager_oom_crashes_total",
"Number of OOM crashes.",
[]string{"host"},
prometheus.Labels{"zone": zone},
),
RAMUsageDesc: prometheus.NewDesc(
"clustermanager_ram_usage_bytes",
"RAM usage as reported to the cluster manager.",
[]string{"host"},
prometheus.Labels{"zone": zone},
),
}
}
func ExampleCollector() {
workerDB := NewClusterManager("db")
workerCA := NewClusterManager("ca")
// Since we are dealing with custom Collector implementations, it might
// be a good idea to try it out with a pedantic registry.
reg := prometheus.NewPedanticRegistry()
reg.MustRegister(workerDB)
reg.MustRegister(workerCA)
}


@@ -1,71 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"net/http"
"github.com/prometheus/client_golang/prometheus"
)
var (
// apiRequestDuration tracks the duration separate for each HTTP status
// class (1xx, 2xx, ...). This creates a fair amount of time series on
// the Prometheus server. Usually, you would track the duration of
// serving HTTP request without partitioning by outcome. Do something
// like this only if needed. Also note how only status classes are
// tracked, not every single status code. The latter would create an
// even larger amount of time series. Request counters partitioned by
// status code are usually OK as each counter only creates one time
// series. Histograms are way more expensive, so partition with care and
// only where you really need separate latency tracking. Partitioning by
// status class is only an example. In concrete cases, other partitions
// might make more sense.
apiRequestDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "api_request_duration_seconds",
Help: "Histogram for the request duration of the public API, partitioned by status class.",
Buckets: prometheus.ExponentialBuckets(0.1, 1.5, 5),
},
[]string{"status_class"},
)
)
func handler(w http.ResponseWriter, r *http.Request) {
status := http.StatusOK
// The ObserverFunc gets called by the deferred ObserveDuration and
// decides which Histogram's Observe method is called.
timer := prometheus.NewTimer(prometheus.ObserverFunc(func(v float64) {
switch {
case status >= 500: // Server error.
apiRequestDuration.WithLabelValues("5xx").Observe(v)
case status >= 400: // Client error.
apiRequestDuration.WithLabelValues("4xx").Observe(v)
case status >= 300: // Redirection.
apiRequestDuration.WithLabelValues("3xx").Observe(v)
case status >= 200: // Success.
apiRequestDuration.WithLabelValues("2xx").Observe(v)
default: // Informational.
apiRequestDuration.WithLabelValues("1xx").Observe(v)
}
}))
defer timer.ObserveDuration()
// Handle the request. Set status accordingly.
// ...
}
func ExampleTimer_complex() {
http.HandleFunc("/api", handler)
}


@@ -1,48 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"os"
"github.com/prometheus/client_golang/prometheus"
)
var (
// If a function is called rarely (i.e. not more often than scrapes
// happen) or ideally only once (like in a batch job), it can make sense
// to use a Gauge for timing the function call. For timing a batch job
// and pushing the result to a Pushgateway, see also the comprehensive
// example in the push package.
funcDuration = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "example_function_duration_seconds",
Help: "Duration of the last call of an example function.",
})
)
func run() error {
// The Set method of the Gauge is used to observe the duration.
timer := prometheus.NewTimer(prometheus.ObserverFunc(funcDuration.Set))
defer timer.ObserveDuration()
// Do something. Return errors as encountered. The use of 'defer' above
// makes sure the function is still timed properly.
return nil
}
func ExampleTimer_gauge() {
if err := run(); err != nil {
os.Exit(1)
}
}


@@ -1,40 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"math/rand"
"time"
"github.com/prometheus/client_golang/prometheus"
)
var (
requestDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
Name: "example_request_duration_seconds",
Help: "Histogram for the runtime of a simple example function.",
Buckets: prometheus.LinearBuckets(0.01, 0.01, 10),
})
)
func ExampleTimer() {
// timer times this example function. It uses a Histogram, but a Summary
// would also work, as both implement Observer. Check out
// https://prometheus.io/docs/practices/histograms/ for differences.
timer := prometheus.NewTimer(requestDuration)
defer timer.ObserveDuration()
// Do something here that takes time.
time.Sleep(time.Duration(rand.NormFloat64()*10000+50000) * time.Microsecond)
}


@@ -1,754 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"bytes"
"fmt"
"math"
"net/http"
"runtime"
"sort"
"strings"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/expfmt"
"github.com/golang/protobuf/proto"
"github.com/prometheus/client_golang/prometheus"
)
func ExampleGauge() {
opsQueued := prometheus.NewGauge(prometheus.GaugeOpts{
Namespace: "our_company",
Subsystem: "blob_storage",
Name: "ops_queued",
Help: "Number of blob storage operations waiting to be processed.",
})
prometheus.MustRegister(opsQueued)
// 10 operations queued by the goroutine managing incoming requests.
opsQueued.Add(10)
// A worker goroutine has picked up a waiting operation.
opsQueued.Dec()
// And once more...
opsQueued.Dec()
}
func ExampleGaugeVec() {
opsQueued := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Namespace: "our_company",
Subsystem: "blob_storage",
Name: "ops_queued",
Help: "Number of blob storage operations waiting to be processed, partitioned by user and type.",
},
[]string{
// Which user has requested the operation?
"user",
// Of what type is the operation?
"type",
},
)
prometheus.MustRegister(opsQueued)
// Increase a value using compact (but order-sensitive!) WithLabelValues().
opsQueued.WithLabelValues("bob", "put").Add(4)
// Increase a value with a map using With. More verbose, but order
// doesn't matter anymore.
opsQueued.With(prometheus.Labels{"type": "delete", "user": "alice"}).Inc()
}
func ExampleGaugeFunc() {
if err := prometheus.Register(prometheus.NewGaugeFunc(
prometheus.GaugeOpts{
Subsystem: "runtime",
Name: "goroutines_count",
Help: "Number of goroutines that currently exist.",
},
func() float64 { return float64(runtime.NumGoroutine()) },
)); err == nil {
fmt.Println("GaugeFunc 'goroutines_count' registered.")
}
// Note that the count of goroutines is a gauge (and not a counter) as
// it can go up and down.
// Output:
// GaugeFunc 'goroutines_count' registered.
}
func ExampleCounter() {
pushCounter := prometheus.NewCounter(prometheus.CounterOpts{
Name: "repository_pushes", // Note: No help string...
})
err := prometheus.Register(pushCounter) // ... so this will return an error.
if err != nil {
fmt.Println("Push counter couldn't be registered, no counting will happen:", err)
return
}
// Try it once more, this time with a help string.
pushCounter = prometheus.NewCounter(prometheus.CounterOpts{
Name: "repository_pushes",
Help: "Number of pushes to external repository.",
})
err = prometheus.Register(pushCounter)
if err != nil {
fmt.Println("Push counter couldn't be registered AGAIN, no counting will happen:", err)
return
}
pushComplete := make(chan struct{})
// TODO: Start a goroutine that performs repository pushes and reports
// each completion via the channel.
for range pushComplete {
pushCounter.Inc()
}
// Output:
// Push counter couldn't be registered, no counting will happen: descriptor Desc{fqName: "repository_pushes", help: "", constLabels: {}, variableLabels: []} is invalid: empty help string
}
func ExampleCounterVec() {
httpReqs := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "http_requests_total",
Help: "How many HTTP requests processed, partitioned by status code and HTTP method.",
},
[]string{"code", "method"},
)
prometheus.MustRegister(httpReqs)
httpReqs.WithLabelValues("404", "POST").Add(42)
// If you have to access the same set of labels very frequently, it
// might be good to retrieve the metric only once and keep a handle to
// it. But beware of deletion of that metric, see below!
m := httpReqs.WithLabelValues("200", "GET")
for i := 0; i < 1000000; i++ {
m.Inc()
}
// Delete a metric from the vector. If you have previously kept a handle
// to that metric (as above), future updates via that handle will go
// unseen (even if you re-create a metric with the same label set
// later).
httpReqs.DeleteLabelValues("200", "GET")
// Same thing with the more verbose Labels syntax.
httpReqs.Delete(prometheus.Labels{"method": "GET", "code": "200"})
}
func ExampleInstrumentHandler() {
// Handle the "/doc" endpoint with the standard http.FileServer handler.
// By wrapping the handler with InstrumentHandler, request count,
// request and response sizes, and request latency are automatically
// exported to Prometheus, partitioned by HTTP status code and method
// and by the handler name (here "fileserver").
http.Handle("/doc", prometheus.InstrumentHandler(
"fileserver", http.FileServer(http.Dir("/usr/share/doc")),
))
// The Prometheus handler still has to be registered to handle the
// "/metrics" endpoint. The handler returned by prometheus.Handler() is
// already instrumented - with "prometheus" as the handler name. In this
// example, we want the handler name to be "metrics", so we instrument
// the uninstrumented Prometheus handler ourselves.
http.Handle("/metrics", prometheus.InstrumentHandler(
"metrics", prometheus.UninstrumentedHandler(),
))
}
func ExampleLabelPairSorter() {
labelPairs := []*dto.LabelPair{
{Name: proto.String("status"), Value: proto.String("404")},
{Name: proto.String("method"), Value: proto.String("get")},
}
sort.Sort(prometheus.LabelPairSorter(labelPairs))
fmt.Println(labelPairs)
// Output:
// [name:"method" value:"get" name:"status" value:"404" ]
}
func ExampleRegister() {
// Imagine you have a worker pool and want to count the tasks completed.
taskCounter := prometheus.NewCounter(prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks_total",
Help: "Total number of tasks completed.",
})
// This will register fine.
if err := prometheus.Register(taskCounter); err != nil {
fmt.Println(err)
} else {
fmt.Println("taskCounter registered.")
}
// Don't forget to tell the HTTP server about the Prometheus handler.
// (In a real program, you still need to start the HTTP server...)
http.Handle("/metrics", prometheus.Handler())
// Now you can start workers and give every one of them a pointer to
// taskCounter and let it increment it whenever it completes a task.
taskCounter.Inc() // This has to happen somewhere in the worker code.
// But wait, you want to see how individual workers perform. So you need
// a vector of counters, with one element for each worker.
taskCounterVec := prometheus.NewCounterVec(
prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks_total",
Help: "Total number of tasks completed.",
},
[]string{"worker_id"},
)
// Registering will fail because we already have a metric of that name.
if err := prometheus.Register(taskCounterVec); err != nil {
fmt.Println("taskCounterVec not registered:", err)
} else {
fmt.Println("taskCounterVec registered.")
}
// To fix, first unregister the old taskCounter.
if prometheus.Unregister(taskCounter) {
fmt.Println("taskCounter unregistered.")
}
// Try registering taskCounterVec again.
if err := prometheus.Register(taskCounterVec); err != nil {
fmt.Println("taskCounterVec not registered:", err)
} else {
fmt.Println("taskCounterVec registered.")
}
// Bummer! Still doesn't work.
// Prometheus will not allow you to ever export metrics with
// inconsistent help strings or label names. After unregistering, the
// unregistered metrics will cease to show up in the /metrics HTTP
// response, but the registry still remembers that those metrics had
// been exported before. For this example, we will now choose a
// different name. (In a real program, you would obviously not export
// the obsolete metric in the first place.)
taskCounterVec = prometheus.NewCounterVec(
prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks_by_id",
Help: "Total number of tasks completed.",
},
[]string{"worker_id"},
)
if err := prometheus.Register(taskCounterVec); err != nil {
fmt.Println("taskCounterVec not registered:", err)
} else {
fmt.Println("taskCounterVec registered.")
}
// Finally it worked!
// The workers have to tell taskCounterVec their id to increment the
// right element in the metric vector.
taskCounterVec.WithLabelValues("42").Inc() // Code from worker 42.
// Each worker could also keep a reference to their own counter element
// around. Pick the counter at initialization time of the worker.
myCounter := taskCounterVec.WithLabelValues("42") // From worker 42 initialization code.
myCounter.Inc() // Somewhere in the code of that worker.
// Note that something like WithLabelValues("42", "spurious arg") would
// panic (because you have provided too many label values). If you want
// an error instead, use GetMetricWithLabelValues(...).
notMyCounter, err := taskCounterVec.GetMetricWithLabelValues("42", "spurious arg")
if err != nil {
fmt.Println("Worker initialization failed:", err)
}
if notMyCounter == nil {
fmt.Println("notMyCounter is nil.")
}
// A different (and somewhat tricky) approach is to use
// ConstLabels. ConstLabels are pairs of label names and label values
// that never change. You might ask what those labels are good for (and
// rightfully so - if they never change, they could as well be part of
// the metric name). There are essentially two use-cases: The first is
// if labels are constant throughout the lifetime of a binary execution,
// but they vary over time or between different instances of a running
// binary. The second is what we have here: Each worker creates and
// registers an own Counter instance where the only difference is in the
// value of the ConstLabels. Those Counters can all be registered
// because the different ConstLabel values guarantee that each worker
// will increment a different Counter metric.
counterOpts := prometheus.CounterOpts{
Subsystem: "worker_pool",
Name: "completed_tasks",
Help: "Total number of tasks completed.",
ConstLabels: prometheus.Labels{"worker_id": "42"},
}
taskCounterForWorker42 := prometheus.NewCounter(counterOpts)
if err := prometheus.Register(taskCounterForWorker42); err != nil {
fmt.Println("taskCounterVForWorker42 not registered:", err)
} else {
fmt.Println("taskCounterForWorker42 registered.")
}
// Obviously, in real code, taskCounterForWorker42 would be a member
// variable of a worker struct, and the "42" would be retrieved with a
// GetId() method or something. The Counter would be created and
// registered in the initialization code of the worker.
// For the creation of the next Counter, we can recycle
// counterOpts. Just change the ConstLabels.
counterOpts.ConstLabels = prometheus.Labels{"worker_id": "2001"}
taskCounterForWorker2001 := prometheus.NewCounter(counterOpts)
if err := prometheus.Register(taskCounterForWorker2001); err != nil {
fmt.Println("taskCounterVForWorker2001 not registered:", err)
} else {
fmt.Println("taskCounterForWorker2001 registered.")
}
taskCounterForWorker2001.Inc()
taskCounterForWorker42.Inc()
taskCounterForWorker2001.Inc()
// Yet another approach would be to turn the workers themselves into
// Collectors and register them. See the Collector example for details.
// Output:
// taskCounter registered.
// taskCounterVec not registered: a previously registered descriptor with the same fully-qualified name as Desc{fqName: "worker_pool_completed_tasks_total", help: "Total number of tasks completed.", constLabels: {}, variableLabels: [worker_id]} has different label names or a different help string
// taskCounter unregistered.
// taskCounterVec not registered: a previously registered descriptor with the same fully-qualified name as Desc{fqName: "worker_pool_completed_tasks_total", help: "Total number of tasks completed.", constLabels: {}, variableLabels: [worker_id]} has different label names or a different help string
// taskCounterVec registered.
// Worker initialization failed: inconsistent label cardinality
// notMyCounter is nil.
// taskCounterForWorker42 registered.
// taskCounterForWorker2001 registered.
}
func ExampleSummary() {
temps := prometheus.NewSummary(prometheus.SummaryOpts{
Name: "pond_temperature_celsius",
Help: "The temperature of the frog pond.",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
})
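// A note on the Objectives map above: each key is a quantile rank and
// each value its permitted absolute error, e.g. 0.9: 0.01 means the
// reported 0.9 quantile may correspond to any rank between 0.89 and
// 0.91.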
// Simulate some observations.
for i := 0; i < 1000; i++ {
temps.Observe(30 + math.Floor(120*math.Sin(float64(i)*0.1))/10)
}
// Just for demonstration, let's check the state of the summary by
// (ab)using its Write method (which is usually only used by Prometheus
// internally).
metric := &dto.Metric{}
temps.Write(metric)
fmt.Println(proto.MarshalTextString(metric))
// Output:
// summary: <
// sample_count: 1000
// sample_sum: 29969.50000000001
// quantile: <
// quantile: 0.5
// value: 31.1
// >
// quantile: <
// quantile: 0.9
// value: 41.3
// >
// quantile: <
// quantile: 0.99
// value: 41.9
// >
// >
}
func ExampleSummaryVec() {
temps := prometheus.NewSummaryVec(
prometheus.SummaryOpts{
Name: "pond_temperature_celsius",
Help: "The temperature of the frog pond.",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
},
[]string{"species"},
)
// Simulate some observations.
for i := 0; i < 1000; i++ {
temps.WithLabelValues("litoria-caerulea").Observe(30 + math.Floor(120*math.Sin(float64(i)*0.1))/10)
temps.WithLabelValues("lithobates-catesbeianus").Observe(32 + math.Floor(100*math.Cos(float64(i)*0.11))/10)
}
// Create a Summary without any observations.
temps.WithLabelValues("leiopelma-hochstetteri")
// Just for demonstration, let's check the state of the summary vector
// by registering it with a custom registry and then let it collect the
// metrics.
reg := prometheus.NewRegistry()
reg.MustRegister(temps)
metricFamilies, err := reg.Gather()
if err != nil || len(metricFamilies) != 1 {
panic("unexpected behavior of custom test registry")
}
fmt.Println(proto.MarshalTextString(metricFamilies[0]))
// Output:
// name: "pond_temperature_celsius"
// help: "The temperature of the frog pond."
// type: SUMMARY
// metric: <
// label: <
// name: "species"
// value: "leiopelma-hochstetteri"
// >
// summary: <
// sample_count: 0
// sample_sum: 0
// quantile: <
// quantile: 0.5
// value: nan
// >
// quantile: <
// quantile: 0.9
// value: nan
// >
// quantile: <
// quantile: 0.99
// value: nan
// >
// >
// >
// metric: <
// label: <
// name: "species"
// value: "lithobates-catesbeianus"
// >
// summary: <
// sample_count: 1000
// sample_sum: 31956.100000000017
// quantile: <
// quantile: 0.5
// value: 32.4
// >
// quantile: <
// quantile: 0.9
// value: 41.4
// >
// quantile: <
// quantile: 0.99
// value: 41.9
// >
// >
// >
// metric: <
// label: <
// name: "species"
// value: "litoria-caerulea"
// >
// summary: <
// sample_count: 1000
// sample_sum: 29969.50000000001
// quantile: <
// quantile: 0.5
// value: 31.1
// >
// quantile: <
// quantile: 0.9
// value: 41.3
// >
// quantile: <
// quantile: 0.99
// value: 41.9
// >
// >
// >
}
func ExampleNewConstSummary() {
desc := prometheus.NewDesc(
"http_request_duration_seconds",
"A summary of the HTTP request durations.",
[]string{"code", "method"},
prometheus.Labels{"owner": "example"},
)
// Create a constant summary from values we got from a 3rd party telemetry system.
s := prometheus.MustNewConstSummary(
desc,
4711, 403.34,
map[float64]float64{0.5: 42.3, 0.9: 323.3},
"200", "get",
)
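// The arguments above are, in order: the Desc, the sample count, the
// sample sum, a map from quantile rank to pre-computed quantile value,
// and finally the values for the variable labels declared in the Desc
// ("code", then "method").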
// Just for demonstration, let's check the state of the summary by
// (ab)using its Write method (which is usually only used by Prometheus
// internally).
metric := &dto.Metric{}
s.Write(metric)
fmt.Println(proto.MarshalTextString(metric))
// Output:
// label: <
// name: "code"
// value: "200"
// >
// label: <
// name: "method"
// value: "get"
// >
// label: <
// name: "owner"
// value: "example"
// >
// summary: <
// sample_count: 4711
// sample_sum: 403.34
// quantile: <
// quantile: 0.5
// value: 42.3
// >
// quantile: <
// quantile: 0.9
// value: 323.3
// >
// >
}
func ExampleHistogram() {
temps := prometheus.NewHistogram(prometheus.HistogramOpts{
Name: "pond_temperature_celsius",
Help: "The temperature of the frog pond.", // Sorry, we can't measure how badly it smells.
Buckets: prometheus.LinearBuckets(20, 5, 5), // 5 buckets, each 5 centigrade wide.
})
// Simulate some observations.
for i := 0; i < 1000; i++ {
temps.Observe(30 + math.Floor(120*math.Sin(float64(i)*0.1))/10)
}
// Just for demonstration, let's check the state of the histogram by
// (ab)using its Write method (which is usually only used by Prometheus
// internally).
metric := &dto.Metric{}
temps.Write(metric)
fmt.Println(proto.MarshalTextString(metric))
// Output:
// histogram: <
// sample_count: 1000
// sample_sum: 29969.50000000001
// bucket: <
// cumulative_count: 192
// upper_bound: 20
// >
// bucket: <
// cumulative_count: 366
// upper_bound: 25
// >
// bucket: <
// cumulative_count: 501
// upper_bound: 30
// >
// bucket: <
// cumulative_count: 638
// upper_bound: 35
// >
// bucket: <
// cumulative_count: 816
// upper_bound: 40
// >
// >
}
func ExampleNewConstHistogram() {
desc := prometheus.NewDesc(
"http_request_duration_seconds",
"A histogram of the HTTP request durations.",
[]string{"code", "method"},
prometheus.Labels{"owner": "example"},
)
// Create a constant histogram from values we got from a 3rd party telemetry system.
h := prometheus.MustNewConstHistogram(
desc,
4711, 403.34,
map[float64]uint64{25: 121, 50: 2403, 100: 3221, 200: 4233},
"200", "get",
)
// Just for demonstration, let's check the state of the histogram by
// (ab)using its Write method (which is usually only used by Prometheus
// internally).
metric := &dto.Metric{}
h.Write(metric)
fmt.Println(proto.MarshalTextString(metric))
// Output:
// label: <
// name: "code"
// value: "200"
// >
// label: <
// name: "method"
// value: "get"
// >
// label: <
// name: "owner"
// value: "example"
// >
// histogram: <
// sample_count: 4711
// sample_sum: 403.34
// bucket: <
// cumulative_count: 121
// upper_bound: 25
// >
// bucket: <
// cumulative_count: 2403
// upper_bound: 50
// >
// bucket: <
// cumulative_count: 3221
// upper_bound: 100
// >
// bucket: <
// cumulative_count: 4233
// upper_bound: 200
// >
// >
}
func ExampleAlreadyRegisteredError() {
reqCounter := prometheus.NewCounter(prometheus.CounterOpts{
Name: "requests_total",
Help: "The total number of requests served.",
})
if err := prometheus.Register(reqCounter); err != nil {
if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
// A counter for that metric has been registered before.
// Use the old counter from now on.
reqCounter = are.ExistingCollector.(prometheus.Counter)
} else {
// Something else went wrong!
panic(err)
}
}
reqCounter.Inc()
}
func ExampleGatherers() {
reg := prometheus.NewRegistry()
temp := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "temperature_kelvin",
Help: "Temperature in Kelvin.",
},
[]string{"location"},
)
reg.MustRegister(temp)
temp.WithLabelValues("outside").Set(273.14)
temp.WithLabelValues("inside").Set(298.44)
var parser expfmt.TextParser
text := `
# TYPE humidity_percent gauge
# HELP humidity_percent Humidity in %.
humidity_percent{location="outside"} 45.4
humidity_percent{location="inside"} 33.2
# TYPE temperature_kelvin gauge
# HELP temperature_kelvin Temperature in Kelvin.
temperature_kelvin{location="somewhere else"} 4.5
`
parseText := func() ([]*dto.MetricFamily, error) {
parsed, err := parser.TextToMetricFamilies(strings.NewReader(text))
if err != nil {
return nil, err
}
var result []*dto.MetricFamily
for _, mf := range parsed {
result = append(result, mf)
}
return result, nil
}
gatherers := prometheus.Gatherers{
reg,
prometheus.GathererFunc(parseText),
}
gathering, err := gatherers.Gather()
if err != nil {
fmt.Println(err)
}
out := &bytes.Buffer{}
for _, mf := range gathering {
if _, err := expfmt.MetricFamilyToText(out, mf); err != nil {
panic(err)
}
}
fmt.Print(out.String())
fmt.Println("----------")
// Note how the temperature_kelvin metric family has been merged from
// different sources. Now try the same with a text source that
// contains inconsistent metrics:
text = `
# TYPE humidity_percent gauge
# HELP humidity_percent Humidity in %.
humidity_percent{location="outside"} 45.4
humidity_percent{location="inside"} 33.2
# TYPE temperature_kelvin gauge
# HELP temperature_kelvin Temperature in Kelvin.
# Duplicate metric:
temperature_kelvin{location="outside"} 265.3
# Wrong labels:
temperature_kelvin 4.5
`
gathering, err = gatherers.Gather()
if err != nil {
fmt.Println(err)
}
// Note that still as many metrics as possible are returned:
out.Reset()
for _, mf := range gathering {
if _, err := expfmt.MetricFamilyToText(out, mf); err != nil {
panic(err)
}
}
fmt.Print(out.String())
// Output:
// # HELP humidity_percent Humidity in %.
// # TYPE humidity_percent gauge
// humidity_percent{location="inside"} 33.2
// humidity_percent{location="outside"} 45.4
// # HELP temperature_kelvin Temperature in Kelvin.
// # TYPE temperature_kelvin gauge
// temperature_kelvin{location="inside"} 298.44
// temperature_kelvin{location="outside"} 273.14
// temperature_kelvin{location="somewhere else"} 4.5
// ----------
// 2 error(s) occurred:
// * collected metric temperature_kelvin label:<name:"location" value:"outside" > gauge:<value:265.3 > was collected before with the same name and label values
// * collected metric temperature_kelvin gauge:<value:4.5 > has label dimensions inconsistent with previously collected metrics in the same metric family
// # HELP humidity_percent Humidity in %.
// # TYPE humidity_percent gauge
// humidity_percent{location="inside"} 33.2
// humidity_percent{location="outside"} 45.4
// # HELP temperature_kelvin Temperature in Kelvin.
// # TYPE temperature_kelvin gauge
// temperature_kelvin{location="inside"} 298.44
// temperature_kelvin{location="outside"} 273.14
}

View File

@@ -1,97 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus_test
import (
"expvar"
"fmt"
"sort"
"strings"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/client_golang/prometheus"
)
func ExampleNewExpvarCollector() {
expvarCollector := prometheus.NewExpvarCollector(map[string]*prometheus.Desc{
"memstats": prometheus.NewDesc(
"expvar_memstats",
"All numeric memstats as one metric family. Not a good role-model, actually... ;-)",
[]string{"type"}, nil,
),
"lone-int": prometheus.NewDesc(
"expvar_lone_int",
"Just an expvar int as an example.",
nil, nil,
),
"http-request-map": prometheus.NewDesc(
"expvar_http_request_total",
"How many http requests processed, partitioned by status code and http method.",
[]string{"code", "method"}, nil,
),
})
prometheus.MustRegister(expvarCollector)
// The Prometheus part is done here. But to show that this example is
// doing anything, we have to manually export something via expvar. In
// real-life use-cases, some library would already have exported via
// expvar what we want to re-export as Prometheus metrics.
expvar.NewInt("lone-int").Set(42)
expvarMap := expvar.NewMap("http-request-map")
var (
expvarMap1, expvarMap2 expvar.Map
expvarInt11, expvarInt12, expvarInt21, expvarInt22 expvar.Int
)
expvarMap1.Init()
expvarMap2.Init()
expvarInt11.Set(3)
expvarInt12.Set(13)
expvarInt21.Set(11)
expvarInt22.Set(212)
expvarMap1.Set("POST", &expvarInt11)
expvarMap1.Set("GET", &expvarInt12)
expvarMap2.Set("POST", &expvarInt21)
expvarMap2.Set("GET", &expvarInt22)
expvarMap.Set("404", &expvarMap1)
expvarMap.Set("200", &expvarMap2)
// Results in the following expvar map:
// "http-request-count": {"200": {"POST": 11, "GET": 212}, "404": {"POST": 3, "GET": 13}}
// Let's see what the scrape would yield, but exclude the memstats metrics.
metricStrings := []string{}
metric := dto.Metric{}
metricChan := make(chan prometheus.Metric)
go func() {
expvarCollector.Collect(metricChan)
close(metricChan)
}()
for m := range metricChan {
if !strings.Contains(m.Desc().String(), "expvar_memstats") {
metric.Reset()
m.Write(&metric)
metricStrings = append(metricStrings, metric.String())
}
}
sort.Strings(metricStrings)
for _, s := range metricStrings {
fmt.Println(strings.TrimRight(s, " "))
}
// Output:
// label:<name:"code" value:"200" > label:<name:"method" value:"GET" > untyped:<value:212 >
// label:<name:"code" value:"200" > label:<name:"method" value:"POST" > untyped:<value:11 >
// label:<name:"code" value:"404" > label:<name:"method" value:"GET" > untyped:<value:13 >
// label:<name:"code" value:"404" > label:<name:"method" value:"POST" > untyped:<value:3 >
// untyped:<value:42 >
}

View File

@@ -1,202 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"math"
"math/rand"
"sync"
"testing"
"testing/quick"
"time"
dto "github.com/prometheus/client_model/go"
)
func listenGaugeStream(vals, result chan float64, done chan struct{}) {
var sum float64
outer:
for {
select {
case <-done:
close(vals)
for v := range vals {
sum += v
}
break outer
case v := <-vals:
sum += v
}
}
result <- sum
close(result)
}
func TestGaugeConcurrency(t *testing.T) {
it := func(n uint32) bool {
mutations := int(n % 10000)
concLevel := int(n%15 + 1)
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sStream := make(chan float64, mutations*concLevel)
result := make(chan float64)
done := make(chan struct{})
go listenGaugeStream(sStream, result, done)
go func() {
end.Wait()
close(done)
}()
gge := NewGauge(GaugeOpts{
Name: "test_gauge",
Help: "no help can be found here",
})
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
for j := 0; j < mutations; j++ {
vals[j] = rand.Float64() - 0.5
}
go func(vals []float64) {
start.Wait()
for _, v := range vals {
sStream <- v
gge.Add(v)
}
end.Done()
}(vals)
}
start.Done()
if expected, got := <-result, math.Float64frombits(gge.(*value).valBits); math.Abs(expected-got) > 0.000001 {
t.Fatalf("expected approx. %f, got %f", expected, got)
return false
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Fatal(err)
}
}
func TestGaugeVecConcurrency(t *testing.T) {
it := func(n uint32) bool {
mutations := int(n % 10000)
concLevel := int(n%15 + 1)
vecLength := int(n%5 + 1)
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sStreams := make([]chan float64, vecLength)
results := make([]chan float64, vecLength)
done := make(chan struct{})
for i := 0; i < vecLength; i++ {
sStreams[i] = make(chan float64, mutations*concLevel)
results[i] = make(chan float64)
go listenGaugeStream(sStreams[i], results[i], done)
}
go func() {
end.Wait()
close(done)
}()
gge := NewGaugeVec(
GaugeOpts{
Name: "test_gauge",
Help: "no help can be found here",
},
[]string{"label"},
)
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
pick := make([]int, mutations)
for j := 0; j < mutations; j++ {
vals[j] = rand.Float64() - 0.5
pick[j] = rand.Intn(vecLength)
}
go func(vals []float64) {
start.Wait()
for i, v := range vals {
sStreams[pick[i]] <- v
gge.WithLabelValues(string('A' + pick[i])).Add(v)
}
end.Done()
}(vals)
}
start.Done()
for i := range sStreams {
if expected, got := <-results[i], math.Float64frombits(gge.WithLabelValues(string('A'+i)).(*value).valBits); math.Abs(expected-got) > 0.000001 {
t.Fatalf("expected approx. %f, got %f", expected, got)
return false
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Fatal(err)
}
}
func TestGaugeFunc(t *testing.T) {
gf := NewGaugeFunc(
GaugeOpts{
Name: "test_name",
Help: "test help",
ConstLabels: Labels{"a": "1", "b": "2"},
},
func() float64 { return 3.1415 },
)
if expected, got := `Desc{fqName: "test_name", help: "test help", constLabels: {a="1",b="2"}, variableLabels: []}`, gf.Desc().String(); expected != got {
t.Errorf("expected %q, got %q", expected, got)
}
m := &dto.Metric{}
gf.Write(m)
if expected, got := `label:<name:"a" value:"1" > label:<name:"b" value:"2" > gauge:<value:3.1415 > `, m.String(); expected != got {
t.Errorf("expected %q, got %q", expected, got)
}
}
func TestGaugeSetCurrentTime(t *testing.T) {
g := NewGauge(GaugeOpts{
Name: "test_name",
Help: "test help",
})
g.SetToCurrentTime()
unixTime := float64(time.Now().Unix())
m := &dto.Metric{}
g.Write(m)
delta := unixTime - m.GetGauge().GetValue()
// This is just a smoke test to make sure SetToCurrentTime is not
// totally off. Tests with current time involved are hard...
if math.Abs(delta) > 5 {
t.Errorf("Gauge set to current time deviates from current time by more than 5s, delta is %f seconds", delta)
}
}

View File

@@ -1,127 +0,0 @@
package prometheus
import (
"runtime"
"testing"
"time"
dto "github.com/prometheus/client_model/go"
)
func TestGoCollector(t *testing.T) {
var (
c = NewGoCollector()
ch = make(chan Metric)
waitc = make(chan struct{})
closec = make(chan struct{})
old = -1
)
defer close(closec)
go func() {
c.Collect(ch)
go func(c <-chan struct{}) {
<-c
}(closec)
<-waitc
c.Collect(ch)
}()
for {
select {
case m := <-ch:
// m can be Gauge or Counter,
// currently just test the go_goroutines Gauge
// and ignore others.
if m.Desc().fqName != "go_goroutines" {
continue
}
pb := &dto.Metric{}
m.Write(pb)
if pb.GetGauge() == nil {
continue
}
if old == -1 {
old = int(pb.GetGauge().GetValue())
close(waitc)
continue
}
if diff := int(pb.GetGauge().GetValue()) - old; diff != 1 {
// TODO: This is flaky in highly concurrent situations.
t.Errorf("want 1 new goroutine, got %d", diff)
}
// GoCollector performs three sends per call.
// We therefore need to receive three more sends here (from the second
// Collect call above) to shut down cleanly.
<-ch
<-ch
<-ch
return
case <-time.After(1 * time.Second):
t.Fatalf("expected collect timed out")
}
}
}
func TestGCCollector(t *testing.T) {
var (
c = NewGoCollector()
ch = make(chan Metric)
waitc = make(chan struct{})
closec = make(chan struct{})
oldGC uint64
oldPause float64
)
defer close(closec)
go func() {
c.Collect(ch)
// force GC
runtime.GC()
<-waitc
c.Collect(ch)
}()
first := true
for {
select {
case metric := <-ch:
switch m := metric.(type) {
case *constSummary, *value:
pb := &dto.Metric{}
m.Write(pb)
if pb.GetSummary() == nil {
continue
}
if len(pb.GetSummary().Quantile) != 5 {
t.Errorf("expected 4 buckets, got %d", len(pb.GetSummary().Quantile))
}
for idx, want := range []float64{0.0, 0.25, 0.5, 0.75, 1.0} {
if *pb.GetSummary().Quantile[idx].Quantile != want {
t.Errorf("bucket #%d is off, got %f, want %f", idx, *pb.GetSummary().Quantile[idx].Quantile, want)
}
}
if first {
first = false
oldGC = *pb.GetSummary().SampleCount
oldPause = *pb.GetSummary().SampleSum
close(waitc)
continue
}
if diff := *pb.GetSummary().SampleCount - oldGC; diff != 1 {
t.Errorf("want 1 new garbage collection run, got %d", diff)
}
if diff := *pb.GetSummary().SampleSum - oldPause; diff <= 0 {
t.Errorf("want moar pause, got %f", diff)
}
return
}
case <-time.After(1 * time.Second):
t.Fatalf("expected collect timed out")
}
}
}

View File

@@ -1,280 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package graphite provides a bridge to push Prometheus metrics to a Graphite
// server.
package graphite
import (
"bufio"
"errors"
"fmt"
"io"
"net"
"sort"
"time"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/model"
"golang.org/x/net/context"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/client_golang/prometheus"
)
const (
defaultInterval = 15 * time.Second
millisecondsPerSecond = 1000
)
// HandlerErrorHandling defines how a Handler serving metrics will handle
// errors.
type HandlerErrorHandling int
// These constants cause handlers serving metrics to behave as described if
// errors are encountered.
const (
// Ignore errors and try to push as many metrics to Graphite as possible.
ContinueOnError HandlerErrorHandling = iota
// Abort the push to Graphite upon the first error encountered.
AbortOnError
)
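// For illustration (assumed usage, mirroring ExampleBridge in this
// package's tests): the behaviour is selected via Config.ErrorHandling,
// e.g.
//
//	b, err := NewBridge(&Config{URL: "graphite.example.org:3099", ErrorHandling: AbortOnError})
//
// With ContinueOnError, a partially failed Gather still results in a
// push of whatever metric families could be collected.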
// Config defines the Graphite bridge config.
type Config struct {
// The URL to push data to. Required.
URL string
// The prefix for the pushed Graphite metrics. Defaults to empty string.
Prefix string
// The interval to use for pushing data to Graphite. Defaults to 15 seconds.
Interval time.Duration
// The timeout for pushing metrics to Graphite. Defaults to 15 seconds.
Timeout time.Duration
// The Gatherer to use for metrics. Defaults to prometheus.DefaultGatherer.
Gatherer prometheus.Gatherer
// The logger that messages are written to. Defaults to no logging.
Logger Logger
// ErrorHandling defines how errors are handled. Note that errors are
// logged regardless of the configured ErrorHandling, provided the
// Logger is not nil.
ErrorHandling HandlerErrorHandling
}
// Bridge pushes metrics to the configured Graphite server.
type Bridge struct {
url string
prefix string
interval time.Duration
timeout time.Duration
errorHandling HandlerErrorHandling
logger Logger
g prometheus.Gatherer
}
// Logger is the minimal interface Bridge needs for logging. Note that
// log.Logger from the standard library implements this interface, and it is
// easy to implement by custom loggers, if they don't do so already anyway.
type Logger interface {
Println(v ...interface{})
}
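// For illustration (not part of the original file): a standard library
// logger satisfies this interface directly, e.g.
//
//	var _ Logger = log.New(os.Stderr, "graphite bridge: ", log.LstdFlags)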
// NewBridge returns a pointer to a new Bridge struct.
func NewBridge(c *Config) (*Bridge, error) {
b := &Bridge{}
if c.URL == "" {
return nil, errors.New("missing URL")
}
b.url = c.URL
if c.Gatherer == nil {
b.g = prometheus.DefaultGatherer
} else {
b.g = c.Gatherer
}
if c.Logger != nil {
b.logger = c.Logger
}
if c.Prefix != "" {
b.prefix = c.Prefix
}
var z time.Duration
if c.Interval == z {
b.interval = defaultInterval
} else {
b.interval = c.Interval
}
if c.Timeout == z {
b.timeout = defaultInterval
} else {
b.timeout = c.Timeout
}
b.errorHandling = c.ErrorHandling
return b, nil
}
// Run starts the event loop that pushes Prometheus metrics to Graphite at the
// configured interval.
func (b *Bridge) Run(ctx context.Context) {
ticker := time.NewTicker(b.interval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
if err := b.Push(); err != nil && b.logger != nil {
b.logger.Println("error pushing to Graphite:", err)
}
case <-ctx.Done():
return
}
}
}
// Push pushes Prometheus metrics to the configured Graphite server.
func (b *Bridge) Push() error {
mfs, err := b.g.Gather()
if err != nil || len(mfs) == 0 {
switch b.errorHandling {
case AbortOnError:
return err
case ContinueOnError:
if b.logger != nil {
b.logger.Println("continue on error:", err)
}
default:
panic("unrecognized error handling value")
}
}
conn, err := net.DialTimeout("tcp", b.url, b.timeout)
if err != nil {
return err
}
defer conn.Close()
return writeMetrics(conn, mfs, b.prefix, model.Now())
}
func writeMetrics(w io.Writer, mfs []*dto.MetricFamily, prefix string, now model.Time) error {
vec, err := expfmt.ExtractSamples(&expfmt.DecodeOptions{
Timestamp: now,
}, mfs...)
if err != nil {
return err
}
buf := bufio.NewWriter(w)
for _, s := range vec {
if err := writeSanitized(buf, prefix); err != nil {
return err
}
if err := buf.WriteByte('.'); err != nil {
return err
}
if err := writeMetric(buf, s.Metric); err != nil {
return err
}
if _, err := fmt.Fprintf(buf, " %g %d\n", s.Value, int64(s.Timestamp)/millisecondsPerSecond); err != nil {
return err
}
if err := buf.Flush(); err != nil {
return err
}
}
return nil
}
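// A line emitted by writeMetrics looks like the following (taken from
// the expectations in this package's tests), with the timestamp
// truncated from milliseconds to seconds:
//
//	prefix.name_count.constname.constvalue.labelname.val1 3 1477043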
func writeMetric(buf *bufio.Writer, m model.Metric) error {
metricName, hasName := m[model.MetricNameLabel]
numLabels := len(m) - 1
if !hasName {
numLabels = len(m)
}
labelStrings := make([]string, 0, numLabels)
for label, value := range m {
if label != model.MetricNameLabel {
labelStrings = append(labelStrings, fmt.Sprintf("%s %s", string(label), string(value)))
}
}
var err error
switch numLabels {
case 0:
if hasName {
return writeSanitized(buf, string(metricName))
}
default:
sort.Strings(labelStrings)
if err = writeSanitized(buf, string(metricName)); err != nil {
return err
}
for _, s := range labelStrings {
if err = buf.WriteByte('.'); err != nil {
return err
}
if err = writeSanitized(buf, s); err != nil {
return err
}
}
}
return nil
}
func writeSanitized(buf *bufio.Writer, s string) error {
prevUnderscore := false
for _, c := range s {
c = replaceInvalidRune(c)
if c == '_' {
if prevUnderscore {
continue
}
prevUnderscore = true
} else {
prevUnderscore = false
}
if _, err := buf.WriteRune(c); err != nil {
return err
}
}
return nil
}
func replaceInvalidRune(c rune) rune {
if c == ' ' {
return '.'
}
if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || c == '_' || c == ':' || (c >= '0' && c <= '9')) {
return '_'
}
return c
}
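// Illustrative behaviour of the two helpers above (values taken from
// TestSanitize): "hE/l1o" becomes "hE_l1o", and consecutive invalid
// characters collapse into a single underscore, so "he,*ll(.o" becomes
// "he_ll_o".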

View File

@@ -1,309 +0,0 @@
package graphite
import (
"bufio"
"bytes"
"io"
"log"
"net"
"os"
"regexp"
"testing"
"time"
"github.com/prometheus/common/model"
"golang.org/x/net/context"
"github.com/prometheus/client_golang/prometheus"
)
func TestSanitize(t *testing.T) {
testCases := []struct {
in, out string
}{
{in: "hello", out: "hello"},
{in: "hE/l1o", out: "hE_l1o"},
{in: "he,*ll(.o", out: "he_ll_o"},
{in: "hello_there%^&", out: "hello_there_"},
}
var buf bytes.Buffer
w := bufio.NewWriter(&buf)
for i, tc := range testCases {
if err := writeSanitized(w, tc.in); err != nil {
t.Fatalf("write failed: %v", err)
}
if err := w.Flush(); err != nil {
t.Fatalf("flush failed: %v", err)
}
if want, got := tc.out, buf.String(); want != got {
t.Fatalf("test case index %d: got sanitized string %s, want %s", i, got, want)
}
buf.Reset()
}
}
func TestWriteSummary(t *testing.T) {
sumVec := prometheus.NewSummaryVec(
prometheus.SummaryOpts{
Name: "name",
Help: "docstring",
ConstLabels: prometheus.Labels{"constname": "constvalue"},
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
},
[]string{"labelname"},
)
sumVec.WithLabelValues("val1").Observe(float64(10))
sumVec.WithLabelValues("val1").Observe(float64(20))
sumVec.WithLabelValues("val1").Observe(float64(30))
sumVec.WithLabelValues("val2").Observe(float64(20))
sumVec.WithLabelValues("val2").Observe(float64(30))
sumVec.WithLabelValues("val2").Observe(float64(40))
reg := prometheus.NewRegistry()
reg.MustRegister(sumVec)
mfs, err := reg.Gather()
if err != nil {
t.Fatalf("error: %v", err)
}
now := model.Time(1477043083)
var buf bytes.Buffer
err = writeMetrics(&buf, mfs, "prefix", now)
if err != nil {
t.Fatalf("error: %v", err)
}
want := `prefix.name.constname.constvalue.labelname.val1.quantile.0_5 20 1477043
prefix.name.constname.constvalue.labelname.val1.quantile.0_9 30 1477043
prefix.name.constname.constvalue.labelname.val1.quantile.0_99 30 1477043
prefix.name_sum.constname.constvalue.labelname.val1 60 1477043
prefix.name_count.constname.constvalue.labelname.val1 3 1477043
prefix.name.constname.constvalue.labelname.val2.quantile.0_5 30 1477043
prefix.name.constname.constvalue.labelname.val2.quantile.0_9 40 1477043
prefix.name.constname.constvalue.labelname.val2.quantile.0_99 40 1477043
prefix.name_sum.constname.constvalue.labelname.val2 90 1477043
prefix.name_count.constname.constvalue.labelname.val2 3 1477043
`
if got := buf.String(); want != got {
t.Fatalf("wanted \n%s\n, got \n%s\n", want, got)
}
}
func TestWriteHistogram(t *testing.T) {
histVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "name",
Help: "docstring",
ConstLabels: prometheus.Labels{"constname": "constvalue"},
Buckets: []float64{0.01, 0.02, 0.05, 0.1},
},
[]string{"labelname"},
)
histVec.WithLabelValues("val1").Observe(float64(10))
histVec.WithLabelValues("val1").Observe(float64(20))
histVec.WithLabelValues("val1").Observe(float64(30))
histVec.WithLabelValues("val2").Observe(float64(20))
histVec.WithLabelValues("val2").Observe(float64(30))
histVec.WithLabelValues("val2").Observe(float64(40))
reg := prometheus.NewRegistry()
reg.MustRegister(histVec)
mfs, err := reg.Gather()
if err != nil {
t.Fatalf("error: %v", err)
}
now := model.Time(1477043083)
var buf bytes.Buffer
err = writeMetrics(&buf, mfs, "prefix", now)
if err != nil {
t.Fatalf("error: %v", err)
}
want := `prefix.name_bucket.constname.constvalue.labelname.val1.le.0_01 0 1477043
prefix.name_bucket.constname.constvalue.labelname.val1.le.0_02 0 1477043
prefix.name_bucket.constname.constvalue.labelname.val1.le.0_05 0 1477043
prefix.name_bucket.constname.constvalue.labelname.val1.le.0_1 0 1477043
prefix.name_sum.constname.constvalue.labelname.val1 60 1477043
prefix.name_count.constname.constvalue.labelname.val1 3 1477043
prefix.name_bucket.constname.constvalue.labelname.val1.le._Inf 3 1477043
prefix.name_bucket.constname.constvalue.labelname.val2.le.0_01 0 1477043
prefix.name_bucket.constname.constvalue.labelname.val2.le.0_02 0 1477043
prefix.name_bucket.constname.constvalue.labelname.val2.le.0_05 0 1477043
prefix.name_bucket.constname.constvalue.labelname.val2.le.0_1 0 1477043
prefix.name_sum.constname.constvalue.labelname.val2 90 1477043
prefix.name_count.constname.constvalue.labelname.val2 3 1477043
prefix.name_bucket.constname.constvalue.labelname.val2.le._Inf 3 1477043
`
if got := buf.String(); want != got {
t.Fatalf("wanted \n%s\n, got \n%s\n", want, got)
}
}
func TestToReader(t *testing.T) {
cntVec := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "name",
Help: "docstring",
ConstLabels: prometheus.Labels{"constname": "constvalue"},
},
[]string{"labelname"},
)
cntVec.WithLabelValues("val1").Inc()
cntVec.WithLabelValues("val2").Inc()
reg := prometheus.NewRegistry()
reg.MustRegister(cntVec)
want := `prefix.name.constname.constvalue.labelname.val1 1 1477043
prefix.name.constname.constvalue.labelname.val2 1 1477043
`
mfs, err := reg.Gather()
if err != nil {
t.Fatalf("error: %v", err)
}
now := model.Time(1477043083)
var buf bytes.Buffer
err = writeMetrics(&buf, mfs, "prefix", now)
if err != nil {
t.Fatalf("error: %v", err)
}
if got := buf.String(); want != got {
t.Fatalf("wanted \n%s\n, got \n%s\n", want, got)
}
}
func TestPush(t *testing.T) {
reg := prometheus.NewRegistry()
cntVec := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "name",
Help: "docstring",
ConstLabels: prometheus.Labels{"constname": "constvalue"},
},
[]string{"labelname"},
)
cntVec.WithLabelValues("val1").Inc()
cntVec.WithLabelValues("val2").Inc()
reg.MustRegister(cntVec)
host := "localhost"
port := ":56789"
b, err := NewBridge(&Config{
URL: host + port,
Gatherer: reg,
Prefix: "prefix",
})
if err != nil {
t.Fatalf("error creating bridge: %v", err)
}
nmg, err := newMockGraphite(port)
if err != nil {
t.Fatalf("error creating mock graphite: %v", err)
}
defer nmg.Close()
err = b.Push()
if err != nil {
t.Fatalf("error pushing: %v", err)
}
wants := []string{
"prefix.name.constname.constvalue.labelname.val1 1",
"prefix.name.constname.constvalue.labelname.val2 1",
}
select {
case got := <-nmg.readc:
for _, want := range wants {
matched, err := regexp.MatchString(want, got)
if err != nil {
t.Fatalf("error pushing: %v", err)
}
if !matched {
t.Fatalf("missing metric:\nno match for %s received by server:\n%s", want, got)
}
}
return
case err := <-nmg.errc:
t.Fatalf("error reading push: %v", err)
case <-time.After(50 * time.Millisecond):
t.Fatalf("no result from graphite server")
}
}
func newMockGraphite(port string) (*mockGraphite, error) {
readc := make(chan string)
errc := make(chan error)
ln, err := net.Listen("tcp", port)
if err != nil {
return nil, err
}
go func() {
conn, err := ln.Accept()
if err != nil {
errc <- err
}
var b bytes.Buffer
io.Copy(&b, conn)
readc <- b.String()
}()
return &mockGraphite{
readc: readc,
errc: errc,
Listener: ln,
}, nil
}
type mockGraphite struct {
readc chan string
errc chan error
net.Listener
}
func ExampleBridge() {
b, err := NewBridge(&Config{
URL: "graphite.example.org:3099",
Gatherer: prometheus.DefaultGatherer,
Prefix: "prefix",
Interval: 15 * time.Second,
Timeout: 10 * time.Second,
ErrorHandling: AbortOnError,
Logger: log.New(os.Stdout, "graphite bridge: ", log.Lshortfile),
})
if err != nil {
panic(err)
}
go func() {
// Start something in a goroutine that uses metrics.
}()
// Push initial metrics to Graphite. Fail fast if the push fails.
if err := b.Push(); err != nil {
panic(err)
}
// Create a Context to control stopping the Run() loop that pushes
// metrics to Graphite.
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// Start pushing metrics to Graphite in the Run() loop.
b.Run(ctx)
}
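// A note added for clarity: Run blocks until the supplied Context is
// cancelled, so a real program would typically invoke b.Run(ctx) in its
// own goroutine; the example above calls it last only for brevity.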

View File

@@ -1,348 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"math"
"math/rand"
"reflect"
"sort"
"sync"
"testing"
"testing/quick"
dto "github.com/prometheus/client_model/go"
)
func benchmarkHistogramObserve(w int, b *testing.B) {
b.StopTimer()
wg := new(sync.WaitGroup)
wg.Add(w)
g := new(sync.WaitGroup)
g.Add(1)
s := NewHistogram(HistogramOpts{})
for i := 0; i < w; i++ {
go func() {
g.Wait()
for i := 0; i < b.N; i++ {
s.Observe(float64(i))
}
wg.Done()
}()
}
b.StartTimer()
g.Done()
wg.Wait()
}
func BenchmarkHistogramObserve1(b *testing.B) {
benchmarkHistogramObserve(1, b)
}
func BenchmarkHistogramObserve2(b *testing.B) {
benchmarkHistogramObserve(2, b)
}
func BenchmarkHistogramObserve4(b *testing.B) {
benchmarkHistogramObserve(4, b)
}
func BenchmarkHistogramObserve8(b *testing.B) {
benchmarkHistogramObserve(8, b)
}
func benchmarkHistogramWrite(w int, b *testing.B) {
b.StopTimer()
wg := new(sync.WaitGroup)
wg.Add(w)
g := new(sync.WaitGroup)
g.Add(1)
s := NewHistogram(HistogramOpts{})
for i := 0; i < 1000000; i++ {
s.Observe(float64(i))
}
for j := 0; j < w; j++ {
outs := make([]dto.Metric, b.N)
go func(o []dto.Metric) {
g.Wait()
for i := 0; i < b.N; i++ {
s.Write(&o[i])
}
wg.Done()
}(outs)
}
b.StartTimer()
g.Done()
wg.Wait()
}
func BenchmarkHistogramWrite1(b *testing.B) {
benchmarkHistogramWrite(1, b)
}
func BenchmarkHistogramWrite2(b *testing.B) {
benchmarkHistogramWrite(2, b)
}
func BenchmarkHistogramWrite4(b *testing.B) {
benchmarkHistogramWrite(4, b)
}
func BenchmarkHistogramWrite8(b *testing.B) {
benchmarkHistogramWrite(8, b)
}
func TestHistogramNonMonotonicBuckets(t *testing.T) {
testCases := map[string][]float64{
"not strictly monotonic": {1, 2, 2, 3},
"not monotonic at all": {1, 2, 4, 3, 5},
"have +Inf in the middle": {1, 2, math.Inf(+1), 3},
}
for name, buckets := range testCases {
func() {
defer func() {
if r := recover(); r == nil {
t.Errorf("Buckets %v are %s but NewHistogram did not panic.", buckets, name)
}
}()
_ = NewHistogram(HistogramOpts{
Name: "test_histogram",
Help: "helpless",
Buckets: buckets,
})
}()
}
}
// Intentionally adding +Inf here to test if that case is handled correctly.
// Also, getCumulativeCounts depends on it.
var testBuckets = []float64{-2, -1, -0.5, 0, 0.5, 1, 2, math.Inf(+1)}
func TestHistogramConcurrency(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test in short mode.")
}
rand.Seed(42)
it := func(n uint32) bool {
mutations := int(n%1e4 + 1e4)
concLevel := int(n%5 + 1)
total := mutations * concLevel
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sum := NewHistogram(HistogramOpts{
Name: "test_histogram",
Help: "helpless",
Buckets: testBuckets,
})
allVars := make([]float64, total)
var sampleSum float64
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
for j := 0; j < mutations; j++ {
v := rand.NormFloat64()
vals[j] = v
allVars[i*mutations+j] = v
sampleSum += v
}
go func(vals []float64) {
start.Wait()
for _, v := range vals {
sum.Observe(v)
}
end.Done()
}(vals)
}
sort.Float64s(allVars)
start.Done()
end.Wait()
m := &dto.Metric{}
sum.Write(m)
if got, want := int(*m.Histogram.SampleCount), total; got != want {
t.Errorf("got sample count %d, want %d", got, want)
}
if got, want := *m.Histogram.SampleSum, sampleSum; math.Abs((got-want)/want) > 0.001 {
t.Errorf("got sample sum %f, want %f", got, want)
}
wantCounts := getCumulativeCounts(allVars)
if got, want := len(m.Histogram.Bucket), len(testBuckets)-1; got != want {
t.Errorf("got %d buckets in protobuf, want %d", got, want)
}
for i, wantBound := range testBuckets {
if i == len(testBuckets)-1 {
break // No +Inf bucket in protobuf.
}
if gotBound := *m.Histogram.Bucket[i].UpperBound; gotBound != wantBound {
t.Errorf("got bound %f, want %f", gotBound, wantBound)
}
if gotCount, wantCount := *m.Histogram.Bucket[i].CumulativeCount, wantCounts[i]; gotCount != wantCount {
t.Errorf("got count %d, want %d", gotCount, wantCount)
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Error(err)
}
}
func TestHistogramVecConcurrency(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test in short mode.")
}
rand.Seed(42)
objectives := make([]float64, 0, len(DefObjectives))
for qu := range DefObjectives {
objectives = append(objectives, qu)
}
sort.Float64s(objectives)
it := func(n uint32) bool {
mutations := int(n%1e4 + 1e4)
concLevel := int(n%7 + 1)
vecLength := int(n%3 + 1)
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
his := NewHistogramVec(
HistogramOpts{
Name: "test_histogram",
Help: "helpless",
Buckets: []float64{-2, -1, -0.5, 0, 0.5, 1, 2, math.Inf(+1)},
},
[]string{"label"},
)
allVars := make([][]float64, vecLength)
sampleSums := make([]float64, vecLength)
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
picks := make([]int, mutations)
for j := 0; j < mutations; j++ {
v := rand.NormFloat64()
vals[j] = v
pick := rand.Intn(vecLength)
picks[j] = pick
allVars[pick] = append(allVars[pick], v)
sampleSums[pick] += v
}
go func(vals []float64) {
start.Wait()
for i, v := range vals {
his.WithLabelValues(string('A' + picks[i])).Observe(v)
}
end.Done()
}(vals)
}
for _, vars := range allVars {
sort.Float64s(vars)
}
start.Done()
end.Wait()
for i := 0; i < vecLength; i++ {
m := &dto.Metric{}
s := his.WithLabelValues(string('A' + i))
s.(Histogram).Write(m)
if got, want := len(m.Histogram.Bucket), len(testBuckets)-1; got != want {
t.Errorf("got %d buckets in protobuf, want %d", got, want)
}
if got, want := int(*m.Histogram.SampleCount), len(allVars[i]); got != want {
t.Errorf("got sample count %d, want %d", got, want)
}
if got, want := *m.Histogram.SampleSum, sampleSums[i]; math.Abs((got-want)/want) > 0.001 {
t.Errorf("got sample sum %f, want %f", got, want)
}
wantCounts := getCumulativeCounts(allVars[i])
for j, wantBound := range testBuckets {
if j == len(testBuckets)-1 {
break // No +Inf bucket in protobuf.
}
if gotBound := *m.Histogram.Bucket[j].UpperBound; gotBound != wantBound {
t.Errorf("got bound %f, want %f", gotBound, wantBound)
}
if gotCount, wantCount := *m.Histogram.Bucket[j].CumulativeCount, wantCounts[j]; gotCount != wantCount {
t.Errorf("got count %d, want %d", gotCount, wantCount)
}
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Error(err)
}
}
func getCumulativeCounts(vars []float64) []uint64 {
counts := make([]uint64, len(testBuckets))
for _, v := range vars {
for i := len(testBuckets) - 1; i >= 0; i-- {
if v > testBuckets[i] {
break
}
counts[i]++
}
}
return counts
}
func TestBuckets(t *testing.T) {
got := LinearBuckets(-15, 5, 6)
want := []float64{-15, -10, -5, 0, 5, 10}
if !reflect.DeepEqual(got, want) {
t.Errorf("linear buckets: got %v, want %v", got, want)
}
got = ExponentialBuckets(100, 1.2, 3)
want = []float64{100, 120, 144}
if !reflect.DeepEqual(got, want) {
t.Errorf("linear buckets: got %v, want %v", got, want)
}
}

View File

@@ -1,154 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"net/http"
"net/http/httptest"
"testing"
"time"
dto "github.com/prometheus/client_model/go"
)
type respBody string
func (b respBody) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusTeapot)
w.Write([]byte(b))
}
func TestInstrumentHandler(t *testing.T) {
defer func(n nower) {
now = n.(nower)
}(now)
instant := time.Now()
end := instant.Add(30 * time.Second)
now = nowSeries(instant, end)
respBody := respBody("Howdy there!")
hndlr := InstrumentHandler("test-handler", respBody)
opts := SummaryOpts{
Subsystem: "http",
ConstLabels: Labels{"handler": "test-handler"},
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
}
reqCnt := NewCounterVec(
CounterOpts{
Namespace: opts.Namespace,
Subsystem: opts.Subsystem,
Name: "requests_total",
Help: "Total number of HTTP requests made.",
ConstLabels: opts.ConstLabels,
},
instLabels,
)
err := Register(reqCnt)
if err == nil {
t.Fatal("expected reqCnt to be registered already")
}
if are, ok := err.(AlreadyRegisteredError); ok {
reqCnt = are.ExistingCollector.(*CounterVec)
} else {
t.Fatal("unexpected registration error:", err)
}
opts.Name = "request_duration_microseconds"
opts.Help = "The HTTP request latencies in microseconds."
reqDur := NewSummary(opts)
err = Register(reqDur)
if err == nil {
t.Fatal("expected reqDur to be registered already")
}
if are, ok := err.(AlreadyRegisteredError); ok {
reqDur = are.ExistingCollector.(Summary)
} else {
t.Fatal("unexpected registration error:", err)
}
opts.Name = "request_size_bytes"
opts.Help = "The HTTP request sizes in bytes."
reqSz := NewSummary(opts)
err = Register(reqSz)
if err == nil {
t.Fatal("expected reqSz to be registered already")
}
if _, ok := err.(AlreadyRegisteredError); !ok {
t.Fatal("unexpected registration error:", err)
}
opts.Name = "response_size_bytes"
opts.Help = "The HTTP response sizes in bytes."
resSz := NewSummary(opts)
err = Register(resSz)
if err == nil {
t.Fatal("expected resSz to be registered already")
}
if _, ok := err.(AlreadyRegisteredError); !ok {
t.Fatal("unexpected registration error:", err)
}
reqCnt.Reset()
resp := httptest.NewRecorder()
req := &http.Request{
Method: "GET",
}
hndlr.ServeHTTP(resp, req)
if resp.Code != http.StatusTeapot {
t.Fatalf("expected status %d, got %d", http.StatusTeapot, resp.Code)
}
if string(resp.Body.Bytes()) != "Howdy there!" {
t.Fatalf("expected body %s, got %s", "Howdy there!", string(resp.Body.Bytes()))
}
out := &dto.Metric{}
reqDur.Write(out)
if want, got := "test-handler", out.Label[0].GetValue(); want != got {
t.Errorf("want label value %q in reqDur, got %q", want, got)
}
if want, got := uint64(1), out.Summary.GetSampleCount(); want != got {
t.Errorf("want sample count %d in reqDur, got %d", want, got)
}
out.Reset()
if want, got := 1, len(reqCnt.children); want != got {
t.Errorf("want %d children in reqCnt, got %d", want, got)
}
cnt, err := reqCnt.GetMetricWithLabelValues("get", "418")
if err != nil {
t.Fatal(err)
}
cnt.Write(out)
if want, got := "418", out.Label[0].GetValue(); want != got {
t.Errorf("want label value %q in reqCnt, got %q", want, got)
}
if want, got := "test-handler", out.Label[1].GetValue(); want != got {
t.Errorf("want label value %q in reqCnt, got %q", want, got)
}
if want, got := "get", out.Label[2].GetValue(); want != got {
t.Errorf("want label value %q in reqCnt, got %q", want, got)
}
if out.Counter == nil {
t.Fatal("expected non-nil counter in reqCnt")
}
if want, got := 1., out.Counter.GetValue(); want != got {
t.Errorf("want reqCnt of %f, got %f", want, got)
}
}

View File

@@ -1,35 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import "testing"
func TestBuildFQName(t *testing.T) {
scenarios := []struct{ namespace, subsystem, name, result string }{
{"a", "b", "c", "a_b_c"},
{"", "b", "c", "b_c"},
{"a", "", "c", "a_c"},
{"", "", "c", "c"},
{"a", "b", "", ""},
{"a", "", "", ""},
{"", "b", "", ""},
{" ", "", "", ""},
}
for i, s := range scenarios {
if want, got := s.result, BuildFQName(s.namespace, s.subsystem, s.name); want != got {
t.Errorf("%d. want %s, got %s", i, want, got)
}
}
}

View File

@@ -1,58 +0,0 @@
package prometheus
import (
"bytes"
"os"
"regexp"
"testing"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/procfs"
)
func TestProcessCollector(t *testing.T) {
if _, err := procfs.Self(); err != nil {
t.Skipf("skipping TestProcessCollector, procfs not available: %s", err)
}
registry := NewRegistry()
if err := registry.Register(NewProcessCollector(os.Getpid(), "")); err != nil {
t.Fatal(err)
}
if err := registry.Register(NewProcessCollectorPIDFn(
func() (int, error) { return os.Getpid(), nil }, "foobar"),
); err != nil {
t.Fatal(err)
}
mfs, err := registry.Gather()
if err != nil {
t.Fatal(err)
}
var buf bytes.Buffer
for _, mf := range mfs {
if _, err := expfmt.MetricFamilyToText(&buf, mf); err != nil {
t.Fatal(err)
}
}
for _, re := range []*regexp.Regexp{
regexp.MustCompile("\nprocess_cpu_seconds_total [0-9]"),
regexp.MustCompile("\nprocess_max_fds [1-9]"),
regexp.MustCompile("\nprocess_open_fds [1-9]"),
regexp.MustCompile("\nprocess_virtual_memory_bytes [1-9]"),
regexp.MustCompile("\nprocess_resident_memory_bytes [1-9]"),
regexp.MustCompile("\nprocess_start_time_seconds [0-9.]{10,}"),
regexp.MustCompile("\nfoobar_process_cpu_seconds_total [0-9]"),
regexp.MustCompile("\nfoobar_process_max_fds [1-9]"),
regexp.MustCompile("\nfoobar_process_open_fds [1-9]"),
regexp.MustCompile("\nfoobar_process_virtual_memory_bytes [1-9]"),
regexp.MustCompile("\nfoobar_process_resident_memory_bytes [1-9]"),
regexp.MustCompile("\nfoobar_process_start_time_seconds [0-9.]{10,}"),
} {
if !re.Match(buf.Bytes()) {
t.Errorf("want body to match %s\n%s", re, buf.String())
}
}
}

View File

@@ -1,131 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package promhttp
import (
"bytes"
"errors"
"log"
"net/http"
"net/http/httptest"
"testing"
"github.com/prometheus/client_golang/prometheus"
)
type errorCollector struct{}
func (e errorCollector) Describe(ch chan<- *prometheus.Desc) {
ch <- prometheus.NewDesc("invalid_metric", "not helpful", nil, nil)
}
func (e errorCollector) Collect(ch chan<- prometheus.Metric) {
ch <- prometheus.NewInvalidMetric(
prometheus.NewDesc("invalid_metric", "not helpful", nil, nil),
errors.New("collect error"),
)
}
func TestHandlerErrorHandling(t *testing.T) {
// Create a registry that collects a MetricFamily with two elements,
// another with one, and reports an error.
reg := prometheus.NewRegistry()
cnt := prometheus.NewCounter(prometheus.CounterOpts{
Name: "the_count",
Help: "Ah-ah-ah! Thunder and lightning!",
})
reg.MustRegister(cnt)
cntVec := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "name",
Help: "docstring",
ConstLabels: prometheus.Labels{"constname": "constvalue"},
},
[]string{"labelname"},
)
cntVec.WithLabelValues("val1").Inc()
cntVec.WithLabelValues("val2").Inc()
reg.MustRegister(cntVec)
reg.MustRegister(errorCollector{})
logBuf := &bytes.Buffer{}
logger := log.New(logBuf, "", 0)
writer := httptest.NewRecorder()
request, _ := http.NewRequest("GET", "/", nil)
request.Header.Add("Accept", "test/plain")
errorHandler := HandlerFor(reg, HandlerOpts{
ErrorLog: logger,
ErrorHandling: HTTPErrorOnError,
})
continueHandler := HandlerFor(reg, HandlerOpts{
ErrorLog: logger,
ErrorHandling: ContinueOnError,
})
panicHandler := HandlerFor(reg, HandlerOpts{
ErrorLog: logger,
ErrorHandling: PanicOnError,
})
wantMsg := `error gathering metrics: error collecting metric Desc{fqName: "invalid_metric", help: "not helpful", constLabels: {}, variableLabels: []}: collect error
`
wantErrorBody := `An error has occurred during metrics gathering:
error collecting metric Desc{fqName: "invalid_metric", help: "not helpful", constLabels: {}, variableLabels: []}: collect error
`
wantOKBody := `# HELP name docstring
# TYPE name counter
name{constname="constvalue",labelname="val1"} 1
name{constname="constvalue",labelname="val2"} 1
# HELP the_count Ah-ah-ah! Thunder and lightning!
# TYPE the_count counter
the_count 0
`
errorHandler.ServeHTTP(writer, request)
if got, want := writer.Code, http.StatusInternalServerError; got != want {
t.Errorf("got HTTP status code %d, want %d", got, want)
}
if got := logBuf.String(); got != wantMsg {
t.Errorf("got log message:\n%s\nwant log mesage:\n%s\n", got, wantMsg)
}
if got := writer.Body.String(); got != wantErrorBody {
t.Errorf("got body:\n%s\nwant body:\n%s\n", got, wantErrorBody)
}
logBuf.Reset()
writer.Body.Reset()
writer.Code = http.StatusOK
continueHandler.ServeHTTP(writer, request)
if got, want := writer.Code, http.StatusOK; got != want {
t.Errorf("got HTTP status code %d, want %d", got, want)
}
if got := logBuf.String(); got != wantMsg {
t.Errorf("got log message %q, want %q", got, wantMsg)
}
if got := writer.Body.String(); got != wantOKBody {
t.Errorf("got body %q, want %q", got, wantOKBody)
}
defer func() {
if err := recover(); err == nil {
t.Error("expected panic from panicHandler")
}
}()
panicHandler.ServeHTTP(writer, request)
}

View File

@@ -1,195 +0,0 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// +build go1.8
package promhttp
import (
"log"
"net/http"
"testing"
"time"
"github.com/prometheus/client_golang/prometheus"
)
func TestClientMiddlewareAPI(t *testing.T) {
client := http.DefaultClient
client.Timeout = 1 * time.Second
reg := prometheus.NewRegistry()
inFlightGauge := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "client_in_flight_requests",
Help: "A gauge of in-flight requests for the wrapped client.",
})
counter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "client_api_requests_total",
Help: "A counter for requests from the wrapped client.",
},
[]string{"code", "method"},
)
dnsLatencyVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "dns_duration_seconds",
Help: "Trace dns latency histogram.",
Buckets: []float64{.005, .01, .025, .05},
},
[]string{"event"},
)
tlsLatencyVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "tls_duration_seconds",
Help: "Trace tls latency histogram.",
Buckets: []float64{.05, .1, .25, .5},
},
[]string{"event"},
)
histVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "request_duration_seconds",
Help: "A histogram of request latencies.",
Buckets: prometheus.DefBuckets,
},
[]string{"method"},
)
reg.MustRegister(counter, tlsLatencyVec, dnsLatencyVec, histVec, inFlightGauge)
trace := &InstrumentTrace{
DNSStart: func(t float64) {
dnsLatencyVec.WithLabelValues("dns_start")
},
DNSDone: func(t float64) {
dnsLatencyVec.WithLabelValues("dns_done")
},
TLSHandshakeStart: func(t float64) {
tlsLatencyVec.WithLabelValues("tls_handshake_start")
},
TLSHandshakeDone: func(t float64) {
tlsLatencyVec.WithLabelValues("tls_handshake_done")
},
}
client.Transport = InstrumentRoundTripperInFlight(inFlightGauge,
InstrumentRoundTripperCounter(counter,
InstrumentRoundTripperTrace(trace,
InstrumentRoundTripperDuration(histVec, http.DefaultTransport),
),
),
)
resp, err := client.Get("http://google.com")
if err != nil {
t.Fatalf("%v", err)
}
defer resp.Body.Close()
}
func ExampleInstrumentRoundTripperDuration() {
client := http.DefaultClient
client.Timeout = 1 * time.Second
inFlightGauge := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "client_in_flight_requests",
Help: "A gauge of in-flight requests for the wrapped client.",
})
counter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "client_api_requests_total",
Help: "A counter for requests from the wrapped client.",
},
[]string{"code", "method"},
)
// dnsLatencyVec uses custom buckets based on expected dns durations.
// It has an instance label "event", which is set in the
// DNSStart and DNSDone hook functions defined in the
// InstrumentTrace struct below.
dnsLatencyVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "dns_duration_seconds",
Help: "Trace dns latency histogram.",
Buckets: []float64{.005, .01, .025, .05},
},
[]string{"event"},
)
// tlsLatencyVec uses custom buckets based on expected tls durations.
// It has an instance label "event", which is set in the
// TLSHandshakeStart and TLSHandshakeDone hook functions defined in the
// InstrumentTrace struct below.
tlsLatencyVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "tls_duration_seconds",
Help: "Trace tls latency histogram.",
Buckets: []float64{.05, .1, .25, .5},
},
[]string{"event"},
)
// histVec has no labels, making it a zero-dimensional ObserverVec.
histVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "request_duration_seconds",
Help: "A histogram of request latencies.",
Buckets: prometheus.DefBuckets,
},
[]string{},
)
// Register all of the metrics in the standard registry.
prometheus.MustRegister(counter, tlsLatencyVec, dnsLatencyVec, histVec, inFlightGauge)
// Define functions for the available httptrace.ClientTrace hook
// functions that we want to instrument.
trace := &InstrumentTrace{
DNSStart: func(t float64) {
dnsLatencyVec.WithLabelValues("dns_start")
},
DNSDone: func(t float64) {
dnsLatencyVec.WithLabelValues("dns_done")
},
TLSHandshakeStart: func(t float64) {
tlsLatencyVec.WithLabelValues("tls_handshake_start")
},
TLSHandshakeDone: func(t float64) {
tlsLatencyVec.WithLabelValues("tls_handshake_done")
},
}
// Wrap the default RoundTripper with middleware.
roundTripper := InstrumentRoundTripperInFlight(inFlightGauge,
InstrumentRoundTripperCounter(counter,
InstrumentRoundTripperTrace(trace,
InstrumentRoundTripperDuration(histVec, http.DefaultTransport),
),
),
)
// Set the RoundTripper on our client.
client.Transport = roundTripper
resp, err := client.Get("http://google.com")
if err != nil {
log.Printf("error: %v", err)
}
defer resp.Body.Close()
}

View File

@@ -1,233 +0,0 @@
// Copyright 2017 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package promhttp
import (
"io"
"log"
"net/http"
"net/http/httptest"
"testing"
"github.com/prometheus/client_golang/prometheus"
)
func TestMiddlewareAPI(t *testing.T) {
reg := prometheus.NewRegistry()
inFlightGauge := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "in_flight_requests",
Help: "A gauge of requests currently being served by the wrapped handler.",
})
counter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "api_requests_total",
Help: "A counter for requests to the wrapped handler.",
},
[]string{"code", "method"},
)
histVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "response_duration_seconds",
Help: "A histogram of request latencies.",
Buckets: prometheus.DefBuckets,
ConstLabels: prometheus.Labels{"handler": "api"},
},
[]string{"method"},
)
writeHeaderVec := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "write_header_duration_seconds",
Help: "A histogram of time to first write latencies.",
Buckets: prometheus.DefBuckets,
ConstLabels: prometheus.Labels{"handler": "api"},
},
[]string{},
)
responseSize := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "push_request_size_bytes",
Help: "A histogram of request sizes for requests.",
Buckets: []float64{200, 500, 900, 1500},
},
[]string{},
)
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("OK"))
})
reg.MustRegister(inFlightGauge, counter, histVec, responseSize, writeHeaderVec)
chain := InstrumentHandlerInFlight(inFlightGauge,
InstrumentHandlerCounter(counter,
InstrumentHandlerDuration(histVec,
InstrumentHandlerTimeToWriteHeader(writeHeaderVec,
InstrumentHandlerResponseSize(responseSize, handler),
),
),
),
)
r, _ := http.NewRequest("GET", "www.example.com", nil)
w := httptest.NewRecorder()
chain.ServeHTTP(w, r)
}
func TestInstrumentTimeToFirstWrite(t *testing.T) {
var i int
dobs := &responseWriterDelegator{
ResponseWriter: httptest.NewRecorder(),
observeWriteHeader: func(status int) {
i = status
},
}
d := newDelegator(dobs, nil)
d.WriteHeader(http.StatusOK)
if i != http.StatusOK {
t.Fatalf("failed to execute observeWriteHeader")
}
}
// testResponseWriter is an http.ResponseWriter that also implements
// http.CloseNotifier, http.Flusher, and io.ReaderFrom.
type testResponseWriter struct {
closeNotifyCalled, flushCalled, readFromCalled bool
}
func (t *testResponseWriter) Header() http.Header { return nil }
func (t *testResponseWriter) Write([]byte) (int, error) { return 0, nil }
func (t *testResponseWriter) WriteHeader(int) {}
func (t *testResponseWriter) CloseNotify() <-chan bool {
t.closeNotifyCalled = true
return nil
}
func (t *testResponseWriter) Flush() { t.flushCalled = true }
func (t *testResponseWriter) ReadFrom(io.Reader) (int64, error) {
t.readFromCalled = true
return 0, nil
}
func TestInterfaceUpgrade(t *testing.T) {
w := &testResponseWriter{}
d := newDelegator(w, nil)
d.(http.CloseNotifier).CloseNotify()
if !w.closeNotifyCalled {
t.Error("CloseNotify not called")
}
d.(http.Flusher).Flush()
if !w.flushCalled {
t.Error("Flush not called")
}
d.(io.ReaderFrom).ReadFrom(nil)
if !w.readFromCalled {
t.Error("ReadFrom not called")
}
if _, ok := d.(http.Hijacker); ok {
t.Error("delegator unexpectedly implements http.Hijacker")
}
}
func ExampleInstrumentHandlerDuration() {
inFlightGauge := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "in_flight_requests",
Help: "A gauge of requests currently being served by the wrapped handler.",
})
counter := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "api_requests_total",
Help: "A counter for requests to the wrapped handler.",
},
[]string{"code", "method"},
)
// pushVec and pullVec are partitioned by the HTTP method and use custom
// buckets based on the expected request duration. ConstLabels are used
// to set a handler label to mark pushVec as tracking the durations for
// pushes and pullVec as tracking the durations for pulls. Note that
// Name, Help, and Buckets need to be the same for consistency, so we
// use the same HistogramOpts after just modifying the ConstLabels.
histogramOpts := prometheus.HistogramOpts{
Name: "request_duration_seconds",
Help: "A histogram of latencies for requests.",
Buckets: []float64{.25, .5, 1, 2.5, 5, 10},
ConstLabels: prometheus.Labels{"handler": "push"},
}
pushVec := prometheus.NewHistogramVec(
histogramOpts,
[]string{"method"},
)
histogramOpts.ConstLabels = prometheus.Labels{"handler": "pull"}
pullVec := prometheus.NewHistogramVec(
histogramOpts,
[]string{"method"},
)
// responseSize has no labels, making it a zero-dimensional
// ObserverVec.
responseSize := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "response_size_bytes",
Help: "A histogram of response sizes for requests.",
Buckets: []float64{200, 500, 900, 1500},
},
[]string{},
)
// Create the handlers that will be wrapped by the middleware.
pushHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Push"))
})
pullHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Pull"))
})
// Register all of the metrics in the standard registry.
prometheus.MustRegister(inFlightGauge, counter, pullVec, pushVec, responseSize)
// Wrap the pushHandler with our shared middleware, but use the
// endpoint-specific pushVec with InstrumentHandlerDuration.
pushChain := InstrumentHandlerInFlight(inFlightGauge,
InstrumentHandlerCounter(counter,
InstrumentHandlerDuration(pushVec,
InstrumentHandlerResponseSize(responseSize, pushHandler),
),
),
)
// Wrap the pullHandler with the shared middleware, but use the
// endpoint-specific pullVec with InstrumentHandlerDuration.
pullChain := InstrumentHandlerInFlight(inFlightGauge,
InstrumentHandlerCounter(counter,
InstrumentHandlerDuration(pullVec,
InstrumentHandlerResponseSize(responseSize, pullHandler),
),
),
)
http.Handle("/metrics", Handler())
http.Handle("/push", pushChain)
http.Handle("/pull", pullChain)
if err := http.ListenAndServe(":3000", nil); err != nil {
log.Fatal(err)
}
}

View File

@@ -1,84 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright (c) 2013, The Prometheus Authors
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
package push_test
import (
"fmt"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/push"
)
var (
completionTime = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "db_backup_last_completion_timestamp_seconds",
Help: "The timestamp of the last completion of a DB backup, successful or not.",
})
successTime = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "db_backup_last_success_timestamp_seconds",
Help: "The timestamp of the last successful completion of a DB backup.",
})
duration = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "db_backup_duration_seconds",
Help: "The duration of the last DB backup in seconds.",
})
records = prometheus.NewGauge(prometheus.GaugeOpts{
Name: "db_backup_records_processed",
Help: "The number of records processed in the last DB backup.",
})
)
func performBackup() (int, error) {
// Perform the backup and return the number of backed up records and any
// applicable error.
// ...
return 42, nil
}
func ExampleAddFromGatherer() {
registry := prometheus.NewRegistry()
registry.MustRegister(completionTime, duration, records)
// Note that successTime is not registered at this time.
start := time.Now()
n, err := performBackup()
records.Set(float64(n))
// Note that time.Since only uses a monotonic clock in Go1.9+.
duration.Set(time.Since(start).Seconds())
completionTime.SetToCurrentTime()
if err != nil {
fmt.Println("DB backup failed:", err)
} else {
// Only now register successTime.
registry.MustRegister(successTime)
successTime.SetToCurrentTime()
}
// AddFromGatherer is used here rather than FromGatherer to not delete a
// previously pushed success timestamp in case of a failure of this
// backup.
if err := push.AddFromGatherer(
"db_backup", nil,
"http://pushgateway:9091",
registry,
); err != nil {
fmt.Println("Could not push to Pushgateway:", err)
}
}

View File

@@ -1,36 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package push_test
import (
"fmt"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/push"
)
func ExampleCollectors() {
completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "db_backup_last_completion_timestamp_seconds",
Help: "The timestamp of the last successful completion of a DB backup.",
})
completionTime.SetToCurrentTime()
if err := push.Collectors(
"db_backup", push.HostnameGroupingKey(),
"http://pushgateway:9091",
completionTime,
); err != nil {
fmt.Println("Could not push completion time to Pushgateway:", err)
}
}

View File

@@ -1,172 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright (c) 2013, The Prometheus Authors
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
// Package push provides functions to push metrics to a Pushgateway. The metrics
// to push are either collected from a provided registry, or from explicitly
// listed collectors.
//
// See the documentation of the Pushgateway to understand the meaning of the
// grouping parameters and the differences between push.FromGatherer and
// push.Collectors on the one hand and push.AddFromGatherer and
// push.AddCollectors on the other hand: https://github.com/prometheus/pushgateway
package push
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"os"
"strings"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/model"
"github.com/prometheus/client_golang/prometheus"
)
const contentTypeHeader = "Content-Type"
// FromGatherer triggers a metric collection by the provided Gatherer (which is
// usually implemented by a prometheus.Registry) and pushes all gathered metrics
// to the Pushgateway specified by url, using the provided job name and the
// (optional) further grouping labels (the grouping map may be nil). See the
// Pushgateway documentation for detailed implications of the job and other
// grouping labels. Neither the job name nor any grouping label value may
// contain a "/". The metrics pushed must not contain a job label of their own
// nor any of the grouping labels.
//
// You can use just host:port or ip:port as url, in which case 'http://' is
// added automatically. You can also include the scheme in the URL. However, do
// not include the '/metrics/job/...' part.
//
// Note that all previously pushed metrics with the same job and other grouping
// labels will be replaced with the metrics pushed by this call. (It uses HTTP
// method 'PUT' to push to the Pushgateway.)
func FromGatherer(job string, grouping map[string]string, url string, g prometheus.Gatherer) error {
return push(job, grouping, url, g, "PUT")
}
// AddFromGatherer works like FromGatherer, but only previously pushed metrics
// with the same name (and the same job and other grouping labels) will be
// replaced. (It uses HTTP method 'POST' to push to the Pushgateway.)
func AddFromGatherer(job string, grouping map[string]string, url string, g prometheus.Gatherer) error {
return push(job, grouping, url, g, "POST")
}
func push(job string, grouping map[string]string, pushURL string, g prometheus.Gatherer, method string) error {
if !strings.Contains(pushURL, "://") {
pushURL = "http://" + pushURL
}
if strings.HasSuffix(pushURL, "/") {
pushURL = pushURL[:len(pushURL)-1]
}
if strings.Contains(job, "/") {
return fmt.Errorf("job contains '/': %s", job)
}
urlComponents := []string{url.QueryEscape(job)}
for ln, lv := range grouping {
if !model.LabelName(ln).IsValid() {
return fmt.Errorf("grouping label has invalid name: %s", ln)
}
if strings.Contains(lv, "/") {
return fmt.Errorf("value of grouping label %s contains '/': %s", ln, lv)
}
urlComponents = append(urlComponents, ln, lv)
}
pushURL = fmt.Sprintf("%s/metrics/job/%s", pushURL, strings.Join(urlComponents, "/"))
mfs, err := g.Gather()
if err != nil {
return err
}
buf := &bytes.Buffer{}
enc := expfmt.NewEncoder(buf, expfmt.FmtProtoDelim)
// Check for pre-existing grouping labels:
for _, mf := range mfs {
for _, m := range mf.GetMetric() {
for _, l := range m.GetLabel() {
if l.GetName() == "job" {
return fmt.Errorf("pushed metric %s (%s) already contains a job label", mf.GetName(), m)
}
if _, ok := grouping[l.GetName()]; ok {
return fmt.Errorf(
"pushed metric %s (%s) already contains grouping label %s",
mf.GetName(), m, l.GetName(),
)
}
}
}
enc.Encode(mf)
}
req, err := http.NewRequest(method, pushURL, buf)
if err != nil {
return err
}
req.Header.Set(contentTypeHeader, string(expfmt.FmtProtoDelim))
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != 202 {
body, _ := ioutil.ReadAll(resp.Body) // Ignore any further error as this is for an error message only.
return fmt.Errorf("unexpected status code %d while pushing to %s: %s", resp.StatusCode, pushURL, body)
}
return nil
}
// Collectors works like FromGatherer, but it does not use a Gatherer. Instead,
// it collects from the provided collectors directly. It is a convenient way to
// push only a few metrics.
func Collectors(job string, grouping map[string]string, url string, collectors ...prometheus.Collector) error {
return pushCollectors(job, grouping, url, "PUT", collectors...)
}
// AddCollectors works like AddFromGatherer, but it does not use a Gatherer.
// Instead, it collects from the provided collectors directly. It is a
// convenient way to push only a few metrics.
func AddCollectors(job string, grouping map[string]string, url string, collectors ...prometheus.Collector) error {
return pushCollectors(job, grouping, url, "POST", collectors...)
}
func pushCollectors(job string, grouping map[string]string, url, method string, collectors ...prometheus.Collector) error {
r := prometheus.NewRegistry()
for _, collector := range collectors {
if err := r.Register(collector); err != nil {
return err
}
}
return push(job, grouping, url, r, method)
}
// HostnameGroupingKey returns a label map with the only entry
// {instance="<hostname>"}. This can be conveniently used as the grouping
// parameter if metrics should be pushed with the hostname as label. The
// returned map is created upon each call so that the caller is free to add more
// labels to the map.
func HostnameGroupingKey() map[string]string {
hostname, err := os.Hostname()
if err != nil {
return map[string]string{"instance": "unknown"}
}
return map[string]string{"instance": hostname}
}

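A minimal usage sketch of the push functions defined above; the job name, metric, and Pushgateway address are illustrative assumptions.

package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	reg := prometheus.NewRegistry()
	backups := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "backups_total",
		Help: "Total number of backup attempts.",
	})
	reg.MustRegister(backups)
	backups.Inc()

	// FromGatherer uses PUT, replacing all metrics previously pushed under
	// the same job and grouping labels.
	if err := push.FromGatherer(
		"example_backup", push.HostnameGroupingKey(),
		"http://pushgateway:9091", reg,
	); err != nil {
		log.Println("could not push to Pushgateway:", err)
	}
}
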
View File

@@ -1,176 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright (c) 2013, The Prometheus Authors
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
package push
import (
"bytes"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"testing"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/client_golang/prometheus"
)
func TestPush(t *testing.T) {
var (
lastMethod string
lastBody []byte
lastPath string
)
host, err := os.Hostname()
if err != nil {
t.Error(err)
}
// Fake a Pushgateway that always responds with 202.
pgwOK := httptest.NewServer(
http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
lastMethod = r.Method
var err error
lastBody, err = ioutil.ReadAll(r.Body)
if err != nil {
t.Fatal(err)
}
lastPath = r.URL.EscapedPath()
w.Header().Set("Content-Type", `text/plain; charset=utf-8`)
w.WriteHeader(http.StatusAccepted)
}),
)
defer pgwOK.Close()
// Fake a Pushgateway that always responds with 500.
pgwErr := httptest.NewServer(
http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Error(w, "fake error", http.StatusInternalServerError)
}),
)
defer pgwErr.Close()
metric1 := prometheus.NewCounter(prometheus.CounterOpts{
Name: "testname1",
Help: "testhelp1",
})
metric2 := prometheus.NewGauge(prometheus.GaugeOpts{
Name: "testname2",
Help: "testhelp2",
ConstLabels: prometheus.Labels{"foo": "bar", "dings": "bums"},
})
reg := prometheus.NewRegistry()
reg.MustRegister(metric1)
reg.MustRegister(metric2)
mfs, err := reg.Gather()
if err != nil {
t.Fatal(err)
}
buf := &bytes.Buffer{}
enc := expfmt.NewEncoder(buf, expfmt.FmtProtoDelim)
for _, mf := range mfs {
if err := enc.Encode(mf); err != nil {
t.Fatal(err)
}
}
wantBody := buf.Bytes()
// PushCollectors, all good.
if err := Collectors("testjob", HostnameGroupingKey(), pgwOK.URL, metric1, metric2); err != nil {
t.Fatal(err)
}
if lastMethod != "PUT" {
t.Error("want method PUT for PushCollectors, got", lastMethod)
}
if bytes.Compare(lastBody, wantBody) != 0 {
t.Errorf("got body %v, want %v", lastBody, wantBody)
}
if lastPath != "/metrics/job/testjob/instance/"+host {
t.Error("unexpected path:", lastPath)
}
// PushAddCollectors, with nil grouping, all good.
if err := AddCollectors("testjob", nil, pgwOK.URL, metric1, metric2); err != nil {
t.Fatal(err)
}
if lastMethod != "POST" {
t.Error("want method POST for PushAddCollectors, got", lastMethod)
}
if bytes.Compare(lastBody, wantBody) != 0 {
t.Errorf("got body %v, want %v", lastBody, wantBody)
}
if lastPath != "/metrics/job/testjob" {
t.Error("unexpected path:", lastPath)
}
// PushCollectors with a broken PGW.
if err := Collectors("testjob", nil, pgwErr.URL, metric1, metric2); err == nil {
t.Error("push to broken Pushgateway succeeded")
} else {
if got, want := err.Error(), "unexpected status code 500 while pushing to "+pgwErr.URL+"/metrics/job/testjob: fake error\n"; got != want {
t.Errorf("got error %q, want %q", got, want)
}
}
// PushCollectors with invalid grouping or job.
if err := Collectors("testjob", map[string]string{"foo": "bums"}, pgwErr.URL, metric1, metric2); err == nil {
t.Error("push with grouping contained in metrics succeeded")
}
if err := Collectors("test/job", nil, pgwErr.URL, metric1, metric2); err == nil {
t.Error("push with invalid job value succeeded")
}
if err := Collectors("testjob", map[string]string{"foo/bar": "bums"}, pgwErr.URL, metric1, metric2); err == nil {
t.Error("push with invalid grouping succeeded")
}
if err := Collectors("testjob", map[string]string{"foo-bar": "bums"}, pgwErr.URL, metric1, metric2); err == nil {
t.Error("push with invalid grouping succeeded")
}
// Push registry, all good.
if err := FromGatherer("testjob", HostnameGroupingKey(), pgwOK.URL, reg); err != nil {
t.Fatal(err)
}
if lastMethod != "PUT" {
t.Error("want method PUT for Push, got", lastMethod)
}
if bytes.Compare(lastBody, wantBody) != 0 {
t.Errorf("got body %v, want %v", lastBody, wantBody)
}
// PushAdd registry, all good.
if err := AddFromGatherer("testjob", map[string]string{"a": "x", "b": "y"}, pgwOK.URL, reg); err != nil {
t.Fatal(err)
}
if lastMethod != "POST" {
t.Error("want method POSTT for PushAdd, got", lastMethod)
}
if bytes.Compare(lastBody, wantBody) != 0 {
t.Errorf("got body %v, want %v", lastBody, wantBody)
}
if lastPath != "/metrics/job/testjob/a/x/b/y" && lastPath != "/metrics/job/testjob/b/y/a/x" {
t.Error("unexpected path:", lastPath)
}
}

View File

@@ -1,590 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Copyright (c) 2013, The Prometheus Authors
// All rights reserved.
//
// Use of this source code is governed by a BSD-style license that can be found
// in the LICENSE file.
package prometheus_test
import (
"bytes"
"net/http"
"net/http/httptest"
"testing"
dto "github.com/prometheus/client_model/go"
"github.com/golang/protobuf/proto"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
func testHandler(t testing.TB) {
metricVec := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "name",
Help: "docstring",
ConstLabels: prometheus.Labels{"constname": "constvalue"},
},
[]string{"labelname"},
)
metricVec.WithLabelValues("val1").Inc()
metricVec.WithLabelValues("val2").Inc()
externalMetricFamily := &dto.MetricFamily{
Name: proto.String("externalname"),
Help: proto.String("externaldocstring"),
Type: dto.MetricType_COUNTER.Enum(),
Metric: []*dto.Metric{
{
Label: []*dto.LabelPair{
{
Name: proto.String("externalconstname"),
Value: proto.String("externalconstvalue"),
},
{
Name: proto.String("externallabelname"),
Value: proto.String("externalval1"),
},
},
Counter: &dto.Counter{
Value: proto.Float64(1),
},
},
},
}
externalBuf := &bytes.Buffer{}
enc := expfmt.NewEncoder(externalBuf, expfmt.FmtProtoDelim)
if err := enc.Encode(externalMetricFamily); err != nil {
t.Fatal(err)
}
externalMetricFamilyAsBytes := externalBuf.Bytes()
externalMetricFamilyAsText := []byte(`# HELP externalname externaldocstring
# TYPE externalname counter
externalname{externalconstname="externalconstvalue",externallabelname="externalval1"} 1
`)
externalMetricFamilyAsProtoText := []byte(`name: "externalname"
help: "externaldocstring"
type: COUNTER
metric: <
label: <
name: "externalconstname"
value: "externalconstvalue"
>
label: <
name: "externallabelname"
value: "externalval1"
>
counter: <
value: 1
>
>
`)
externalMetricFamilyAsProtoCompactText := []byte(`name:"externalname" help:"externaldocstring" type:COUNTER metric:<label:<name:"externalconstname" value:"externalconstvalue" > label:<name:"externallabelname" value:"externalval1" > counter:<value:1 > >
`)
expectedMetricFamily := &dto.MetricFamily{
Name: proto.String("name"),
Help: proto.String("docstring"),
Type: dto.MetricType_COUNTER.Enum(),
Metric: []*dto.Metric{
{
Label: []*dto.LabelPair{
{
Name: proto.String("constname"),
Value: proto.String("constvalue"),
},
{
Name: proto.String("labelname"),
Value: proto.String("val1"),
},
},
Counter: &dto.Counter{
Value: proto.Float64(1),
},
},
{
Label: []*dto.LabelPair{
{
Name: proto.String("constname"),
Value: proto.String("constvalue"),
},
{
Name: proto.String("labelname"),
Value: proto.String("val2"),
},
},
Counter: &dto.Counter{
Value: proto.Float64(1),
},
},
},
}
buf := &bytes.Buffer{}
enc = expfmt.NewEncoder(buf, expfmt.FmtProtoDelim)
if err := enc.Encode(expectedMetricFamily); err != nil {
t.Fatal(err)
}
expectedMetricFamilyAsBytes := buf.Bytes()
expectedMetricFamilyAsText := []byte(`# HELP name docstring
# TYPE name counter
name{constname="constvalue",labelname="val1"} 1
name{constname="constvalue",labelname="val2"} 1
`)
expectedMetricFamilyAsProtoText := []byte(`name: "name"
help: "docstring"
type: COUNTER
metric: <
label: <
name: "constname"
value: "constvalue"
>
label: <
name: "labelname"
value: "val1"
>
counter: <
value: 1
>
>
metric: <
label: <
name: "constname"
value: "constvalue"
>
label: <
name: "labelname"
value: "val2"
>
counter: <
value: 1
>
>
`)
expectedMetricFamilyAsProtoCompactText := []byte(`name:"name" help:"docstring" type:COUNTER metric:<label:<name:"constname" value:"constvalue" > label:<name:"labelname" value:"val1" > counter:<value:1 > > metric:<label:<name:"constname" value:"constvalue" > label:<name:"labelname" value:"val2" > counter:<value:1 > >
`)
externalMetricFamilyWithSameName := &dto.MetricFamily{
Name: proto.String("name"),
Help: proto.String("docstring"),
Type: dto.MetricType_COUNTER.Enum(),
Metric: []*dto.Metric{
{
Label: []*dto.LabelPair{
{
Name: proto.String("constname"),
Value: proto.String("constvalue"),
},
{
Name: proto.String("labelname"),
Value: proto.String("different_val"),
},
},
Counter: &dto.Counter{
Value: proto.Float64(42),
},
},
},
}
expectedMetricFamilyMergedWithExternalAsProtoCompactText := []byte(`name:"name" help:"docstring" type:COUNTER metric:<label:<name:"constname" value:"constvalue" > label:<name:"labelname" value:"different_val" > counter:<value:42 > > metric:<label:<name:"constname" value:"constvalue" > label:<name:"labelname" value:"val1" > counter:<value:1 > > metric:<label:<name:"constname" value:"constvalue" > label:<name:"labelname" value:"val2" > counter:<value:1 > >
`)
externalMetricFamilyWithInvalidLabelValue := &dto.MetricFamily{
Name: proto.String("name"),
Help: proto.String("docstring"),
Type: dto.MetricType_COUNTER.Enum(),
Metric: []*dto.Metric{
{
Label: []*dto.LabelPair{
{
Name: proto.String("constname"),
Value: proto.String("\xFF"),
},
{
Name: proto.String("labelname"),
Value: proto.String("different_val"),
},
},
Counter: &dto.Counter{
Value: proto.Float64(42),
},
},
},
}
expectedMetricFamilyInvalidLabelValueAsText := []byte(`An error has occurred during metrics gathering:
collected metric's label constname is not utf8: "\xff"
`)
type output struct {
headers map[string]string
body []byte
}
var scenarios = []struct {
headers map[string]string
out output
collector prometheus.Collector
externalMF []*dto.MetricFamily
}{
{ // 0
headers: map[string]string{
"Accept": "foo/bar;q=0.2, dings/bums;q=0.8",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte{},
},
},
{ // 1
headers: map[string]string{
"Accept": "foo/bar;q=0.2, application/quark;q=0.8",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte{},
},
},
{ // 2
headers: map[string]string{
"Accept": "foo/bar;q=0.2, application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=bla;q=0.8",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte{},
},
},
{ // 3
headers: map[string]string{
"Accept": "text/plain;q=0.2, application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited;q=0.8",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited`,
},
body: []byte{},
},
},
{ // 4
headers: map[string]string{
"Accept": "application/json",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: expectedMetricFamilyAsText,
},
collector: metricVec,
},
{ // 5
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited`,
},
body: expectedMetricFamilyAsBytes,
},
collector: metricVec,
},
{ // 6
headers: map[string]string{
"Accept": "application/json",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: externalMetricFamilyAsText,
},
externalMF: []*dto.MetricFamily{externalMetricFamily},
},
{ // 7
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited`,
},
body: externalMetricFamilyAsBytes,
},
externalMF: []*dto.MetricFamily{externalMetricFamily},
},
{ // 8
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited`,
},
body: bytes.Join(
[][]byte{
externalMetricFamilyAsBytes,
expectedMetricFamilyAsBytes,
},
[]byte{},
),
},
collector: metricVec,
externalMF: []*dto.MetricFamily{externalMetricFamily},
},
{ // 9
headers: map[string]string{
"Accept": "text/plain",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: []byte{},
},
},
{ // 10
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=bla;q=0.2, text/plain;q=0.5",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: expectedMetricFamilyAsText,
},
collector: metricVec,
},
{ // 11
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=bla;q=0.2, text/plain;q=0.5;version=0.0.4",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; version=0.0.4`,
},
body: bytes.Join(
[][]byte{
externalMetricFamilyAsText,
expectedMetricFamilyAsText,
},
[]byte{},
),
},
collector: metricVec,
externalMF: []*dto.MetricFamily{externalMetricFamily},
},
{ // 12
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited;q=0.2, text/plain;q=0.5;version=0.0.2",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited`,
},
body: bytes.Join(
[][]byte{
externalMetricFamilyAsBytes,
expectedMetricFamilyAsBytes,
},
[]byte{},
),
},
collector: metricVec,
externalMF: []*dto.MetricFamily{externalMetricFamily},
},
{ // 13
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=text;q=0.5, application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited;q=0.4",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=text`,
},
body: bytes.Join(
[][]byte{
externalMetricFamilyAsProtoText,
expectedMetricFamilyAsProtoText,
},
[]byte{},
),
},
collector: metricVec,
externalMF: []*dto.MetricFamily{externalMetricFamily},
},
{ // 14
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=compact-text",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=compact-text`,
},
body: bytes.Join(
[][]byte{
externalMetricFamilyAsProtoCompactText,
expectedMetricFamilyAsProtoCompactText,
},
[]byte{},
),
},
collector: metricVec,
externalMF: []*dto.MetricFamily{externalMetricFamily},
},
{ // 15
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=compact-text",
},
out: output{
headers: map[string]string{
"Content-Type": `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=compact-text`,
},
body: bytes.Join(
[][]byte{
externalMetricFamilyAsProtoCompactText,
expectedMetricFamilyMergedWithExternalAsProtoCompactText,
},
[]byte{},
),
},
collector: metricVec,
externalMF: []*dto.MetricFamily{
externalMetricFamily,
externalMetricFamilyWithSameName,
},
},
{ // 16
headers: map[string]string{
"Accept": "application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=compact-text",
},
out: output{
headers: map[string]string{
"Content-Type": `text/plain; charset=utf-8`,
},
body: expectedMetricFamilyInvalidLabelValueAsText,
},
collector: metricVec,
externalMF: []*dto.MetricFamily{
externalMetricFamily,
externalMetricFamilyWithInvalidLabelValue,
},
},
}
for i, scenario := range scenarios {
registry := prometheus.NewPedanticRegistry()
gatherer := prometheus.Gatherer(registry)
if scenario.externalMF != nil {
gatherer = prometheus.Gatherers{
registry,
prometheus.GathererFunc(func() ([]*dto.MetricFamily, error) {
return scenario.externalMF, nil
}),
}
}
if scenario.collector != nil {
registry.Register(scenario.collector)
}
writer := httptest.NewRecorder()
handler := prometheus.InstrumentHandler("prometheus", promhttp.HandlerFor(gatherer, promhttp.HandlerOpts{}))
request, _ := http.NewRequest("GET", "/", nil)
for key, value := range scenario.headers {
request.Header.Add(key, value)
}
handler(writer, request)
for key, value := range scenario.out.headers {
if writer.HeaderMap.Get(key) != value {
t.Errorf(
"%d. expected %q for header %q, got %q",
i, value, key, writer.Header().Get(key),
)
}
}
if !bytes.Equal(scenario.out.body, writer.Body.Bytes()) {
t.Errorf(
"%d. expected body:\n%s\ngot body:\n%s\n",
i, scenario.out.body, writer.Body.Bytes(),
)
}
}
}
func TestHandler(t *testing.T) {
testHandler(t)
}
func BenchmarkHandler(b *testing.B) {
for i := 0; i < b.N; i++ {
testHandler(b)
}
}
func TestRegisterWithOrGet(t *testing.T) {
// Replace the default registerer just to be sure. This is bad, but this
// whole test will go away once RegisterOrGet is removed.
oldRegisterer := prometheus.DefaultRegisterer
defer func() {
prometheus.DefaultRegisterer = oldRegisterer
}()
prometheus.DefaultRegisterer = prometheus.NewRegistry()
original := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "test",
Help: "help",
},
[]string{"foo", "bar"},
)
equalButNotSame := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "test",
Help: "help",
},
[]string{"foo", "bar"},
)
var err error
if err = prometheus.Register(original); err != nil {
t.Fatal(err)
}
if err = prometheus.Register(equalButNotSame); err == nil {
t.Fatal("expected error when registringe equal collector")
}
if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
if are.ExistingCollector != original {
t.Error("expected original collector but got something else")
}
if are.ExistingCollector == equalButNotSame {
t.Error("expected original callector but got new one")
}
} else {
t.Error("unexpected error:", err)
}
}

View File

@@ -1,388 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"math"
"math/rand"
"sort"
"sync"
"testing"
"testing/quick"
"time"
dto "github.com/prometheus/client_model/go"
)
func TestSummaryWithDefaultObjectives(t *testing.T) {
reg := NewRegistry()
summaryWithDefaultObjectives := NewSummary(SummaryOpts{
Name: "default_objectives",
Help: "Test help.",
})
if err := reg.Register(summaryWithDefaultObjectives); err != nil {
t.Error(err)
}
m := &dto.Metric{}
if err := summaryWithDefaultObjectives.Write(m); err != nil {
t.Error(err)
}
if len(m.GetSummary().Quantile) != len(DefObjectives) {
t.Error("expected default objectives in summary")
}
}
func TestSummaryWithoutObjectives(t *testing.T) {
reg := NewRegistry()
summaryWithEmptyObjectives := NewSummary(SummaryOpts{
Name: "empty_objectives",
Help: "Test help.",
Objectives: map[float64]float64{},
})
if err := reg.Register(summaryWithEmptyObjectives); err != nil {
t.Error(err)
}
m := &dto.Metric{}
if err := summaryWithEmptyObjectives.Write(m); err != nil {
t.Error(err)
}
if len(m.GetSummary().Quantile) != 0 {
t.Error("expected no objectives in summary")
}
}
func benchmarkSummaryObserve(w int, b *testing.B) {
b.StopTimer()
wg := new(sync.WaitGroup)
wg.Add(w)
g := new(sync.WaitGroup)
g.Add(1)
s := NewSummary(SummaryOpts{})
for i := 0; i < w; i++ {
go func() {
g.Wait()
for i := 0; i < b.N; i++ {
s.Observe(float64(i))
}
wg.Done()
}()
}
b.StartTimer()
g.Done()
wg.Wait()
}
func BenchmarkSummaryObserve1(b *testing.B) {
benchmarkSummaryObserve(1, b)
}
func BenchmarkSummaryObserve2(b *testing.B) {
benchmarkSummaryObserve(2, b)
}
func BenchmarkSummaryObserve4(b *testing.B) {
benchmarkSummaryObserve(4, b)
}
func BenchmarkSummaryObserve8(b *testing.B) {
benchmarkSummaryObserve(8, b)
}
func benchmarkSummaryWrite(w int, b *testing.B) {
b.StopTimer()
wg := new(sync.WaitGroup)
wg.Add(w)
g := new(sync.WaitGroup)
g.Add(1)
s := NewSummary(SummaryOpts{})
for i := 0; i < 1000000; i++ {
s.Observe(float64(i))
}
for j := 0; j < w; j++ {
outs := make([]dto.Metric, b.N)
go func(o []dto.Metric) {
g.Wait()
for i := 0; i < b.N; i++ {
s.Write(&o[i])
}
wg.Done()
}(outs)
}
b.StartTimer()
g.Done()
wg.Wait()
}
func BenchmarkSummaryWrite1(b *testing.B) {
benchmarkSummaryWrite(1, b)
}
func BenchmarkSummaryWrite2(b *testing.B) {
benchmarkSummaryWrite(2, b)
}
func BenchmarkSummaryWrite4(b *testing.B) {
benchmarkSummaryWrite(4, b)
}
func BenchmarkSummaryWrite8(b *testing.B) {
benchmarkSummaryWrite(8, b)
}
func TestSummaryConcurrency(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test in short mode.")
}
rand.Seed(42)
it := func(n uint32) bool {
mutations := int(n%1e4 + 1e4)
concLevel := int(n%5 + 1)
total := mutations * concLevel
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sum := NewSummary(SummaryOpts{
Name: "test_summary",
Help: "helpless",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
})
allVars := make([]float64, total)
var sampleSum float64
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
for j := 0; j < mutations; j++ {
v := rand.NormFloat64()
vals[j] = v
allVars[i*mutations+j] = v
sampleSum += v
}
go func(vals []float64) {
start.Wait()
for _, v := range vals {
sum.Observe(v)
}
end.Done()
}(vals)
}
sort.Float64s(allVars)
start.Done()
end.Wait()
m := &dto.Metric{}
sum.Write(m)
if got, want := int(*m.Summary.SampleCount), total; got != want {
t.Errorf("got sample count %d, want %d", got, want)
}
if got, want := *m.Summary.SampleSum, sampleSum; math.Abs((got-want)/want) > 0.001 {
t.Errorf("got sample sum %f, want %f", got, want)
}
objectives := make([]float64, 0, len(DefObjectives))
for qu := range DefObjectives {
objectives = append(objectives, qu)
}
sort.Float64s(objectives)
for i, wantQ := range objectives {
ε := DefObjectives[wantQ]
gotQ := *m.Summary.Quantile[i].Quantile
gotV := *m.Summary.Quantile[i].Value
min, max := getBounds(allVars, wantQ, ε)
if gotQ != wantQ {
t.Errorf("got quantile %f, want %f", gotQ, wantQ)
}
if gotV < min || gotV > max {
t.Errorf("got %f for quantile %f, want [%f,%f]", gotV, gotQ, min, max)
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Error(err)
}
}
func TestSummaryVecConcurrency(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test in short mode.")
}
rand.Seed(42)
objectives := make([]float64, 0, len(DefObjectives))
for qu := range DefObjectives {
objectives = append(objectives, qu)
}
sort.Float64s(objectives)
it := func(n uint32) bool {
mutations := int(n%1e4 + 1e4)
concLevel := int(n%7 + 1)
vecLength := int(n%3 + 1)
var start, end sync.WaitGroup
start.Add(1)
end.Add(concLevel)
sum := NewSummaryVec(
SummaryOpts{
Name: "test_summary",
Help: "helpless",
Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
},
[]string{"label"},
)
allVars := make([][]float64, vecLength)
sampleSums := make([]float64, vecLength)
for i := 0; i < concLevel; i++ {
vals := make([]float64, mutations)
picks := make([]int, mutations)
for j := 0; j < mutations; j++ {
v := rand.NormFloat64()
vals[j] = v
pick := rand.Intn(vecLength)
picks[j] = pick
allVars[pick] = append(allVars[pick], v)
sampleSums[pick] += v
}
go func(vals []float64) {
start.Wait()
for i, v := range vals {
sum.WithLabelValues(string('A' + picks[i])).Observe(v)
}
end.Done()
}(vals)
}
for _, vars := range allVars {
sort.Float64s(vars)
}
start.Done()
end.Wait()
for i := 0; i < vecLength; i++ {
m := &dto.Metric{}
s := sum.WithLabelValues(string('A' + i))
s.(Summary).Write(m)
if got, want := int(*m.Summary.SampleCount), len(allVars[i]); got != want {
t.Errorf("got sample count %d for label %c, want %d", got, 'A'+i, want)
}
if got, want := *m.Summary.SampleSum, sampleSums[i]; math.Abs((got-want)/want) > 0.001 {
t.Errorf("got sample sum %f for label %c, want %f", got, 'A'+i, want)
}
for j, wantQ := range objectives {
ε := DefObjectives[wantQ]
gotQ := *m.Summary.Quantile[j].Quantile
gotV := *m.Summary.Quantile[j].Value
min, max := getBounds(allVars[i], wantQ, ε)
if gotQ != wantQ {
t.Errorf("got quantile %f for label %c, want %f", gotQ, 'A'+i, wantQ)
}
if gotV < min || gotV > max {
t.Errorf("got %f for quantile %f for label %c, want [%f,%f]", gotV, gotQ, 'A'+i, min, max)
}
}
}
return true
}
if err := quick.Check(it, nil); err != nil {
t.Error(err)
}
}
func TestSummaryDecay(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test in short mode.")
// More because it depends on timing than because it is particularly long...
}
sum := NewSummary(SummaryOpts{
Name: "test_summary",
Help: "helpless",
MaxAge: 100 * time.Millisecond,
Objectives: map[float64]float64{0.1: 0.001},
AgeBuckets: 10,
})
m := &dto.Metric{}
i := 0
tick := time.NewTicker(time.Millisecond)
for range tick.C {
i++
sum.Observe(float64(i))
if i%10 == 0 {
sum.Write(m)
if got, want := *m.Summary.Quantile[0].Value, math.Max(float64(i)/10, float64(i-90)); math.Abs(got-want) > 20 {
t.Errorf("%d. got %f, want %f", i, got, want)
}
m.Reset()
}
if i >= 1000 {
break
}
}
tick.Stop()
// Wait for MaxAge without observations and make sure quantiles are NaN.
time.Sleep(100 * time.Millisecond)
sum.Write(m)
if got := *m.Summary.Quantile[0].Value; !math.IsNaN(got) {
t.Errorf("got %f, want NaN after expiration", got)
}
}
func getBounds(vars []float64, q, ε float64) (min, max float64) {
// TODO(beorn7): This currently tolerates an error of up to 2*ε. The
// error must be at most ε, but for some reason, it's sometimes slightly
// higher. That's a bug.
n := float64(len(vars))
lower := int((q - 2*ε) * n)
upper := int(math.Ceil((q + 2*ε) * n))
min = vars[0]
if lower > 1 {
min = vars[lower-1]
}
max = vars[len(vars)-1]
if upper < len(vars) {
max = vars[upper-1]
}
return
}

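As a companion to the tests above, a minimal sketch of a Summary with explicitly configured objectives; the metric name, error targets, and observed values are assumptions.

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	// Objectives maps each target quantile to its allowed absolute error.
	sum := prometheus.NewSummary(prometheus.SummaryOpts{
		Name:       "request_duration_seconds",
		Help:       "Request duration in seconds.",
		Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
	})
	for i := 0; i < 1000; i++ {
		sum.Observe(float64(i) / 1000)
	}
	m := &dto.Metric{}
	sum.Write(m) // Write exports count, sum, and the configured quantiles.
	fmt.Println(m.GetSummary().GetQuantile())
}
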
View File

@@ -1,152 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"testing"
dto "github.com/prometheus/client_model/go"
)
func TestTimerObserve(t *testing.T) {
var (
his = NewHistogram(HistogramOpts{Name: "test_histogram"})
sum = NewSummary(SummaryOpts{Name: "test_summary"})
gauge = NewGauge(GaugeOpts{Name: "test_gauge"})
)
func() {
hisTimer := NewTimer(his)
sumTimer := NewTimer(sum)
gaugeTimer := NewTimer(ObserverFunc(gauge.Set))
defer hisTimer.ObserveDuration()
defer sumTimer.ObserveDuration()
defer gaugeTimer.ObserveDuration()
}()
m := &dto.Metric{}
his.Write(m)
if want, got := uint64(1), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for histogram, got %d", want, got)
}
m.Reset()
sum.Write(m)
if want, got := uint64(1), m.GetSummary().GetSampleCount(); want != got {
t.Errorf("want %d observations for summary, got %d", want, got)
}
m.Reset()
gauge.Write(m)
if got := m.GetGauge().GetValue(); got <= 0 {
t.Errorf("want value > 0 for gauge, got %f", got)
}
}
func TestTimerEmpty(t *testing.T) {
emptyTimer := NewTimer(nil)
emptyTimer.ObserveDuration()
// Do nothing, just demonstrate it works without panic.
}
func TestTimerConditionalTiming(t *testing.T) {
var (
his = NewHistogram(HistogramOpts{
Name: "test_histogram",
})
timeMe = true
m = &dto.Metric{}
)
timedFunc := func() {
timer := NewTimer(ObserverFunc(func(v float64) {
if timeMe {
his.Observe(v)
}
}))
defer timer.ObserveDuration()
}
timedFunc() // This will time.
his.Write(m)
if want, got := uint64(1), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for histogram, got %d", want, got)
}
timeMe = false
timedFunc() // This will not time again.
m.Reset()
his.Write(m)
if want, got := uint64(1), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for histogram, got %d", want, got)
}
}
func TestTimerByOutcome(t *testing.T) {
var (
his = NewHistogramVec(
HistogramOpts{Name: "test_histogram"},
[]string{"outcome"},
)
outcome = "foo"
m = &dto.Metric{}
)
timedFunc := func() {
timer := NewTimer(ObserverFunc(func(v float64) {
his.WithLabelValues(outcome).Observe(v)
}))
defer timer.ObserveDuration()
if outcome == "foo" {
outcome = "bar"
return
}
outcome = "foo"
}
timedFunc()
his.WithLabelValues("foo").(Histogram).Write(m)
if want, got := uint64(0), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for 'foo' histogram, got %d", want, got)
}
m.Reset()
his.WithLabelValues("bar").(Histogram).Write(m)
if want, got := uint64(1), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for 'bar' histogram, got %d", want, got)
}
timedFunc()
m.Reset()
his.WithLabelValues("foo").(Histogram).Write(m)
if want, got := uint64(1), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for 'foo' histogram, got %d", want, got)
}
m.Reset()
his.WithLabelValues("bar").(Histogram).Write(m)
if want, got := uint64(1), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for 'bar' histogram, got %d", want, got)
}
timedFunc()
m.Reset()
his.WithLabelValues("foo").(Histogram).Write(m)
if want, got := uint64(1), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for 'foo' histogram, got %d", want, got)
}
m.Reset()
his.WithLabelValues("bar").(Histogram).Write(m)
if want, got := uint64(2), m.GetHistogram().GetSampleCount(); want != got {
t.Errorf("want %d observations for 'bar' histogram, got %d", want, got)
}
}

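A minimal sketch of the Timer pattern the tests above exercise: observing the duration of a function into a histogram. The metric name and simulated work are assumptions.

package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var handlerDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
	Name: "handler_duration_seconds",
	Help: "Duration of the handler in seconds.",
})

func handle() {
	// ObserveDuration records the time elapsed since NewTimer on return.
	timer := prometheus.NewTimer(handlerDuration)
	defer timer.ObserveDuration()
	time.Sleep(10 * time.Millisecond) // Stand-in for real work.
}

func main() {
	prometheus.MustRegister(handlerDuration)
	handle()
}
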
View File

@@ -1,43 +0,0 @@
package prometheus
import (
"fmt"
"testing"
)
func TestNewConstMetricInvalidLabelValues(t *testing.T) {
testCases := []struct {
desc string
labels Labels
}{
{
desc: "non utf8 label value",
labels: Labels{"a": "\xFF"},
},
{
desc: "not enough label values",
labels: Labels{},
},
{
desc: "too many label values",
labels: Labels{"a": "1", "b": "2"},
},
}
for _, test := range testCases {
metricDesc := NewDesc(
"sample_value",
"sample value",
[]string{"a"},
Labels{},
)
expectPanic(t, func() {
MustNewConstMetric(metricDesc, CounterValue, 0.3, "\xFF")
}, fmt.Sprintf("WithLabelValues: expected panic because: %s", test.desc))
if _, err := NewConstMetric(metricDesc, CounterValue, 0.3, "\xFF"); err == nil {
t.Errorf("NewConstMetric: expected error because: %s", test.desc)
}
}
}

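For contrast with the failure cases above, a sketch of creating a const metric with a valid label value; the metric value and label value are assumptions.

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	desc := prometheus.NewDesc(
		"sample_value",
		"sample value",
		[]string{"a"},
		nil,
	)
	// One UTF-8 label value is supplied for the single variable label "a".
	m, err := prometheus.NewConstMetric(desc, prometheus.CounterValue, 0.3, "ok")
	if err != nil {
		fmt.Println("NewConstMetric failed:", err)
		return
	}
	fmt.Println(m.Desc())
}
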
View File

@@ -1,312 +0,0 @@
// Copyright 2014 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package prometheus
import (
"fmt"
"testing"
dto "github.com/prometheus/client_model/go"
)
func TestDelete(t *testing.T) {
vec := NewGaugeVec(
GaugeOpts{
Name: "test",
Help: "helpless",
},
[]string{"l1", "l2"},
)
testDelete(t, vec)
}
func TestDeleteWithCollisions(t *testing.T) {
vec := NewGaugeVec(
GaugeOpts{
Name: "test",
Help: "helpless",
},
[]string{"l1", "l2"},
)
vec.hashAdd = func(h uint64, s string) uint64 { return 1 }
vec.hashAddByte = func(h uint64, b byte) uint64 { return 1 }
testDelete(t, vec)
}
func testDelete(t *testing.T, vec *GaugeVec) {
if got, want := vec.Delete(Labels{"l1": "v1", "l2": "v2"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Gauge).Set(42)
if got, want := vec.Delete(Labels{"l1": "v1", "l2": "v2"}), true; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.Delete(Labels{"l1": "v1", "l2": "v2"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Gauge).Set(42)
if got, want := vec.Delete(Labels{"l2": "v2", "l1": "v1"}), true; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.Delete(Labels{"l2": "v2", "l1": "v1"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Gauge).Set(42)
if got, want := vec.Delete(Labels{"l2": "v1", "l1": "v2"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.Delete(Labels{"l1": "v1"}), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
}
func TestDeleteLabelValues(t *testing.T) {
vec := NewGaugeVec(
GaugeOpts{
Name: "test",
Help: "helpless",
},
[]string{"l1", "l2"},
)
testDeleteLabelValues(t, vec)
}
func TestDeleteLabelValuesWithCollisions(t *testing.T) {
vec := NewGaugeVec(
GaugeOpts{
Name: "test",
Help: "helpless",
},
[]string{"l1", "l2"},
)
vec.hashAdd = func(h uint64, s string) uint64 { return 1 }
vec.hashAddByte = func(h uint64, b byte) uint64 { return 1 }
testDeleteLabelValues(t, vec)
}
func testDeleteLabelValues(t *testing.T, vec *GaugeVec) {
if got, want := vec.DeleteLabelValues("v1", "v2"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Gauge).Set(42)
vec.With(Labels{"l1": "v1", "l2": "v3"}).(Gauge).Set(42) // Add junk data for collision.
if got, want := vec.DeleteLabelValues("v1", "v2"), true; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.DeleteLabelValues("v1", "v2"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.DeleteLabelValues("v1", "v3"), true; got != want {
t.Errorf("got %v, want %v", got, want)
}
vec.With(Labels{"l1": "v1", "l2": "v2"}).(Gauge).Set(42)
// Delete out of order.
if got, want := vec.DeleteLabelValues("v2", "v1"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
if got, want := vec.DeleteLabelValues("v1"), false; got != want {
t.Errorf("got %v, want %v", got, want)
}
}
func TestMetricVec(t *testing.T) {
vec := NewGaugeVec(
GaugeOpts{
Name: "test",
Help: "helpless",
},
[]string{"l1", "l2"},
)
testMetricVec(t, vec)
}
func TestMetricVecWithCollisions(t *testing.T) {
vec := NewGaugeVec(
GaugeOpts{
Name: "test",
Help: "helpless",
},
[]string{"l1", "l2"},
)
vec.hashAdd = func(h uint64, s string) uint64 { return 1 }
vec.hashAddByte = func(h uint64, b byte) uint64 { return 1 }
testMetricVec(t, vec)
}
func testMetricVec(t *testing.T, vec *GaugeVec) {
vec.Reset() // Actually test Reset now!
var pair [2]string
// Keep track of metrics.
expected := map[[2]string]int{}
for i := 0; i < 1000; i++ {
pair[0], pair[1] = fmt.Sprint(i%4), fmt.Sprint(i%5) // Vary the label combinations using different multiples.
expected[pair]++
vec.WithLabelValues(pair[0], pair[1]).Inc()
expected[[2]string{"v1", "v2"}]++
vec.WithLabelValues("v1", "v2").(Gauge).Inc()
}
var total int
for _, metrics := range vec.children {
for _, metric := range metrics {
total++
copy(pair[:], metric.values)
var metricOut dto.Metric
if err := metric.metric.Write(&metricOut); err != nil {
t.Fatal(err)
}
actual := *metricOut.Gauge.Value
var actualPair [2]string
for i, label := range metricOut.Label {
actualPair[i] = *label.Value
}
// Test output pair against metric.values to ensure we've selected
// the right one. We check this to ensure the below check means
// anything at all.
if actualPair != pair {
t.Fatalf("unexpected pair association in metric map: %v != %v", actualPair, pair)
}
if actual != float64(expected[pair]) {
t.Fatalf("incorrect counter value for %v: %v != %v", pair, actual, expected[pair])
}
}
}
if total != len(expected) {
t.Fatalf("unexpected number of metrics: %v != %v", total, len(expected))
}
vec.Reset()
if len(vec.children) > 0 {
t.Fatalf("reset failed")
}
}
func TestCounterVecEndToEndWithCollision(t *testing.T) {
vec := NewCounterVec(
CounterOpts{
Name: "test",
Help: "helpless",
},
[]string{"labelname"},
)
vec.WithLabelValues("77kepQFQ8Kl").Inc()
vec.WithLabelValues("!0IC=VloaY").Add(2)
m := &dto.Metric{}
if err := vec.WithLabelValues("77kepQFQ8Kl").Write(m); err != nil {
t.Fatal(err)
}
if got, want := m.GetLabel()[0].GetValue(), "77kepQFQ8Kl"; got != want {
t.Errorf("got label value %q, want %q", got, want)
}
if got, want := m.GetCounter().GetValue(), 1.; got != want {
t.Errorf("got value %f, want %f", got, want)
}
m.Reset()
if err := vec.WithLabelValues("!0IC=VloaY").Write(m); err != nil {
t.Fatal(err)
}
if got, want := m.GetLabel()[0].GetValue(), "!0IC=VloaY"; got != want {
t.Errorf("got label value %q, want %q", got, want)
}
if got, want := m.GetCounter().GetValue(), 2.; got != want {
t.Errorf("got value %f, want %f", got, want)
}
}
func BenchmarkMetricVecWithLabelValuesBasic(b *testing.B) {
benchmarkMetricVecWithLabelValues(b, map[string][]string{
"l1": {"onevalue"},
"l2": {"twovalue"},
})
}
func BenchmarkMetricVecWithLabelValues2Keys10ValueCardinality(b *testing.B) {
benchmarkMetricVecWithLabelValuesCardinality(b, 2, 10)
}
func BenchmarkMetricVecWithLabelValues4Keys10ValueCardinality(b *testing.B) {
benchmarkMetricVecWithLabelValuesCardinality(b, 4, 10)
}
func BenchmarkMetricVecWithLabelValues2Keys100ValueCardinality(b *testing.B) {
benchmarkMetricVecWithLabelValuesCardinality(b, 2, 100)
}
func BenchmarkMetricVecWithLabelValues10Keys100ValueCardinality(b *testing.B) {
benchmarkMetricVecWithLabelValuesCardinality(b, 10, 100)
}
func BenchmarkMetricVecWithLabelValues10Keys1000ValueCardinality(b *testing.B) {
benchmarkMetricVecWithLabelValuesCardinality(b, 10, 1000)
}
func benchmarkMetricVecWithLabelValuesCardinality(b *testing.B, nkeys, nvalues int) {
labels := map[string][]string{}
for i := 0; i < nkeys; i++ {
var (
k = fmt.Sprintf("key-%v", i)
vs = make([]string, 0, nvalues)
)
for j := 0; j < nvalues; j++ {
vs = append(vs, fmt.Sprintf("value-%v", j))
}
labels[k] = vs
}
benchmarkMetricVecWithLabelValues(b, labels)
}
func benchmarkMetricVecWithLabelValues(b *testing.B, labels map[string][]string) {
var keys []string
for k := range labels { // Map order dependent, who cares though.
keys = append(keys, k)
}
values := make([]string, len(labels)) // Value cache for permutations.
vec := NewGaugeVec(
GaugeOpts{
Name: "test",
Help: "helpless",
},
keys,
)
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Vary the input across the provided map entries based on the iteration count.
for j, k := range keys {
candidates := labels[k]
values[j] = candidates[i%len(candidates)]
}
vec.WithLabelValues(values...)
}
}
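
The tests and benchmarks above exercise the vector API from inside the package. As a rough sketch of how the same GaugeVec calls look from application code (metric name, label names and values are placeholders; the import path is assumed to be the usual client_golang one):

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Illustrative metric; name and labels are placeholders.
	inflight := prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "inflight_requests",
			Help: "Number of requests currently being served.",
		},
		[]string{"handler", "method"},
	)

	// Look up (or lazily create) the child for a label combination.
	inflight.WithLabelValues("/api", "GET").Inc()
	inflight.WithLabelValues("/api", "GET").Dec()

	// Children can be dropped again once a label combination is obsolete.
	inflight.DeleteLabelValues("/api", "GET")
	inflight.Delete(prometheus.Labels{"handler": "/api", "method": "GET"})
}
```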

View File

@@ -1 +0,0 @@
target/

View File

@@ -1,18 +0,0 @@
# Contributing
Prometheus uses GitHub to manage reviews of pull requests.
* If you have a trivial fix or improvement, go ahead and create a pull request,
addressing (with `@...`) the maintainer of this repository (see
[MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.
* If you plan to do something more involved, first discuss your ideas
on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
This will avoid unnecessary work and surely give you and us a good deal
of inspiration.
* Relevant coding style guidelines are the [Go Code Review
Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
and the _Formatting and style_ section of Peter Bourgon's [Go: Best
Practices for Production
Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).

View File

@@ -1 +0,0 @@
* Björn Rabenstein <beorn@soundcloud.com>

View File

@@ -1,62 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
KEY_ID ?= _DEFINE_ME_
all: cpp go java python ruby
SUFFIXES:
cpp: cpp/metrics.pb.cc cpp/metrics.pb.h
cpp/metrics.pb.cc: metrics.proto
protoc $< --cpp_out=cpp/
cpp/metrics.pb.h: metrics.proto
protoc $< --cpp_out=cpp/
go: go/metrics.pb.go
go/metrics.pb.go: metrics.proto
protoc $< --go_out=go/
java: src/main/java/io/prometheus/client/Metrics.java pom.xml
mvn clean compile package
src/main/java/io/prometheus/client/Metrics.java: metrics.proto
protoc $< --java_out=src/main/java
python: python/prometheus/client/model/metrics_pb2.py
python/prometheus/client/model/metrics_pb2.py: metrics.proto
mkdir -p python/prometheus/client/model
protoc $< --python_out=python/prometheus/client/model
ruby:
$(MAKE) -C ruby build
clean:
-rm -rf cpp/*
-rm -rf go/*
-rm -rf java/*
-rm -rf python/*
-$(MAKE) -C ruby clean
-mvn clean
maven-deploy-snapshot: java
mvn clean deploy -Dgpg.keyname=$(KEY_ID) -DperformRelease=true
maven-deploy-release: java
mvn clean release:clean release:prepare release:perform -Dgpg.keyname=$(KEY_ID) -DperformRelease=true
.PHONY: all clean cpp go java maven-deploy-snapshot maven-deploy-release python ruby

View File

@@ -1,26 +0,0 @@
# Background
Under most circumstances, manually downloading this repository should never
be required.
# Prerequisites
## Base
* [Google Protocol Buffers](https://developers.google.com/protocol-buffers)
## Java
* [Apache Maven](http://maven.apache.org)
* [Prometheus Maven Repository](https://github.com/prometheus/io.prometheus-maven-repository) checked out into ../io.prometheus-maven-repository
## Go
* [Go](http://golang.org)
* [goprotobuf](https://code.google.com/p/goprotobuf)
## Ruby
* [Ruby](https://www.ruby-lang.org)
* [bundler](https://rubygems.org/gems/bundler)
# Building
$ make
# Getting Started
* The Go source code is periodically indexed: [Go Protocol Buffer Model](http://godoc.org/github.com/prometheus/client_model/go).
* All of the core developers are accessible via the [Prometheus Developers Mailinglist](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
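
A minimal sketch of using the generated Go model referenced above, assuming it is imported as `dto` (as done elsewhere in this commit) and that the classic `github.com/golang/protobuf/proto` helpers are available; the metric name and values are illustrative:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto" // assumption: classic protobuf helpers
	dto "github.com/prometheus/client_model/go"
)

func main() {
	// Build a MetricFamily by hand, mirroring the metrics.proto schema.
	mf := &dto.MetricFamily{
		Name: proto.String("http_requests_total"),
		Help: proto.String("Total number of HTTP requests."),
		Type: dto.MetricType_COUNTER.Enum(),
		Metric: []*dto.Metric{
			{
				Label: []*dto.LabelPair{
					{Name: proto.String("method"), Value: proto.String("GET")},
				},
				Counter: &dto.Counter{Value: proto.Float64(42)},
			},
		},
	}
	fmt.Println(mf.GetName(), mf.GetMetric()[0].GetCounter().GetValue())
}
```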

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,81 +0,0 @@
// Copyright 2013 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto2";
package io.prometheus.client;
option java_package = "io.prometheus.client";
message LabelPair {
optional string name = 1;
optional string value = 2;
}
enum MetricType {
COUNTER = 0;
GAUGE = 1;
SUMMARY = 2;
UNTYPED = 3;
HISTOGRAM = 4;
}
message Gauge {
optional double value = 1;
}
message Counter {
optional double value = 1;
}
message Quantile {
optional double quantile = 1;
optional double value = 2;
}
message Summary {
optional uint64 sample_count = 1;
optional double sample_sum = 2;
repeated Quantile quantile = 3;
}
message Untyped {
optional double value = 1;
}
message Histogram {
optional uint64 sample_count = 1;
optional double sample_sum = 2;
repeated Bucket bucket = 3; // Ordered in increasing order of upper_bound, +Inf bucket is optional.
}
message Bucket {
optional uint64 cumulative_count = 1; // Cumulative in increasing order.
optional double upper_bound = 2; // Inclusive.
}
message Metric {
repeated LabelPair label = 1;
optional Gauge gauge = 2;
optional Counter counter = 3;
optional Summary summary = 4;
optional Untyped untyped = 5;
optional Histogram histogram = 7;
optional int64 timestamp_ms = 6;
}
message MetricFamily {
optional string name = 1;
optional string help = 2;
optional MetricType type = 3;
repeated Metric metric = 4;
}

View File

@@ -1,130 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.prometheus.client</groupId>
<artifactId>model</artifactId>
<version>0.0.3-SNAPSHOT</version>
<parent>
<groupId>org.sonatype.oss</groupId>
<artifactId>oss-parent</artifactId>
<version>7</version>
</parent>
<name>Prometheus Client Data Model</name>
<url>http://github.com/prometheus/client_model</url>
<description>
Prometheus Client Data Model: Generated Protocol Buffer Assets
</description>
<licenses>
<license>
<name>The Apache Software License, Version 2.0</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
<scm>
<connection>scm:git:git@github.com:prometheus/client_model.git</connection>
<developerConnection>scm:git:git@github.com:prometheus/client_model.git</developerConnection>
<url>git@github.com:prometheus/client_model.git</url>
</scm>
<developers>
<developer>
<id>mtp</id>
<name>Matt T. Proud</name>
<email>matt.proud@gmail.com</email>
</developer>
</developers>
<dependencies>
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>2.5.0</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.8</version>
<configuration>
<encoding>UTF-8</encoding>
<docencoding>UTF-8</docencoding>
<linksource>true</linksource>
</configuration>
<executions>
<execution>
<id>generate-javadoc-site-report</id>
<phase>site</phase>
<goals>
<goal>javadoc</goal>
</goals>
</execution>
<execution>
<id>attach-javadocs</id>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.6</source>
<target>1.6</target>
</configuration>
<version>3.1</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
<version>2.2.1</version>
<executions>
<execution>
<id>attach-sources</id>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>release-sign-artifacts</id>
<activation>
<property>
<name>performRelease</name>
<value>true</value>
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-gpg-plugin</artifactId>
<version>1.4</version>
<executions>
<execution>
<id>sign-artifacts</id>
<phase>verify</phase>
<goals>
<goal>sign</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>

View File

@@ -1,12 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@@ -1,12 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@@ -1,14 +0,0 @@
# Copyright 2013 Prometheus Team
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__all__ = ['metrics_pb2']

View File

@@ -1,575 +0,0 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: metrics.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='metrics.proto',
package='io.prometheus.client',
serialized_pb=_b('\n\rmetrics.proto\x12\x14io.prometheus.client\"(\n\tLabelPair\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t\"\x16\n\x05Gauge\x12\r\n\x05value\x18\x01 \x01(\x01\"\x18\n\x07\x43ounter\x12\r\n\x05value\x18\x01 \x01(\x01\"+\n\x08Quantile\x12\x10\n\x08quantile\x18\x01 \x01(\x01\x12\r\n\x05value\x18\x02 \x01(\x01\"e\n\x07Summary\x12\x14\n\x0csample_count\x18\x01 \x01(\x04\x12\x12\n\nsample_sum\x18\x02 \x01(\x01\x12\x30\n\x08quantile\x18\x03 \x03(\x0b\x32\x1e.io.prometheus.client.Quantile\"\x18\n\x07Untyped\x12\r\n\x05value\x18\x01 \x01(\x01\"c\n\tHistogram\x12\x14\n\x0csample_count\x18\x01 \x01(\x04\x12\x12\n\nsample_sum\x18\x02 \x01(\x01\x12,\n\x06\x62ucket\x18\x03 \x03(\x0b\x32\x1c.io.prometheus.client.Bucket\"7\n\x06\x42ucket\x12\x18\n\x10\x63umulative_count\x18\x01 \x01(\x04\x12\x13\n\x0bupper_bound\x18\x02 \x01(\x01\"\xbe\x02\n\x06Metric\x12.\n\x05label\x18\x01 \x03(\x0b\x32\x1f.io.prometheus.client.LabelPair\x12*\n\x05gauge\x18\x02 \x01(\x0b\x32\x1b.io.prometheus.client.Gauge\x12.\n\x07\x63ounter\x18\x03 \x01(\x0b\x32\x1d.io.prometheus.client.Counter\x12.\n\x07summary\x18\x04 \x01(\x0b\x32\x1d.io.prometheus.client.Summary\x12.\n\x07untyped\x18\x05 \x01(\x0b\x32\x1d.io.prometheus.client.Untyped\x12\x32\n\thistogram\x18\x07 \x01(\x0b\x32\x1f.io.prometheus.client.Histogram\x12\x14\n\x0ctimestamp_ms\x18\x06 \x01(\x03\"\x88\x01\n\x0cMetricFamily\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04help\x18\x02 \x01(\t\x12.\n\x04type\x18\x03 \x01(\x0e\x32 .io.prometheus.client.MetricType\x12,\n\x06metric\x18\x04 \x03(\x0b\x32\x1c.io.prometheus.client.Metric*M\n\nMetricType\x12\x0b\n\x07\x43OUNTER\x10\x00\x12\t\n\x05GAUGE\x10\x01\x12\x0b\n\x07SUMMARY\x10\x02\x12\x0b\n\x07UNTYPED\x10\x03\x12\r\n\tHISTOGRAM\x10\x04\x42\x16\n\x14io.prometheus.client')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_METRICTYPE = _descriptor.EnumDescriptor(
name='MetricType',
full_name='io.prometheus.client.MetricType',
filename=None,
file=DESCRIPTOR,
values=[
_descriptor.EnumValueDescriptor(
name='COUNTER', index=0, number=0,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='GAUGE', index=1, number=1,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='SUMMARY', index=2, number=2,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='UNTYPED', index=3, number=3,
options=None,
type=None),
_descriptor.EnumValueDescriptor(
name='HISTOGRAM', index=4, number=4,
options=None,
type=None),
],
containing_type=None,
options=None,
serialized_start=923,
serialized_end=1000,
)
_sym_db.RegisterEnumDescriptor(_METRICTYPE)
MetricType = enum_type_wrapper.EnumTypeWrapper(_METRICTYPE)
COUNTER = 0
GAUGE = 1
SUMMARY = 2
UNTYPED = 3
HISTOGRAM = 4
_LABELPAIR = _descriptor.Descriptor(
name='LabelPair',
full_name='io.prometheus.client.LabelPair',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='io.prometheus.client.LabelPair.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='value', full_name='io.prometheus.client.LabelPair.value', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=39,
serialized_end=79,
)
_GAUGE = _descriptor.Descriptor(
name='Gauge',
full_name='io.prometheus.client.Gauge',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='value', full_name='io.prometheus.client.Gauge.value', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=81,
serialized_end=103,
)
_COUNTER = _descriptor.Descriptor(
name='Counter',
full_name='io.prometheus.client.Counter',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='value', full_name='io.prometheus.client.Counter.value', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=105,
serialized_end=129,
)
_QUANTILE = _descriptor.Descriptor(
name='Quantile',
full_name='io.prometheus.client.Quantile',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='quantile', full_name='io.prometheus.client.Quantile.quantile', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='value', full_name='io.prometheus.client.Quantile.value', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=131,
serialized_end=174,
)
_SUMMARY = _descriptor.Descriptor(
name='Summary',
full_name='io.prometheus.client.Summary',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='sample_count', full_name='io.prometheus.client.Summary.sample_count', index=0,
number=1, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='sample_sum', full_name='io.prometheus.client.Summary.sample_sum', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='quantile', full_name='io.prometheus.client.Summary.quantile', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=176,
serialized_end=277,
)
_UNTYPED = _descriptor.Descriptor(
name='Untyped',
full_name='io.prometheus.client.Untyped',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='value', full_name='io.prometheus.client.Untyped.value', index=0,
number=1, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=279,
serialized_end=303,
)
_HISTOGRAM = _descriptor.Descriptor(
name='Histogram',
full_name='io.prometheus.client.Histogram',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='sample_count', full_name='io.prometheus.client.Histogram.sample_count', index=0,
number=1, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='sample_sum', full_name='io.prometheus.client.Histogram.sample_sum', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='bucket', full_name='io.prometheus.client.Histogram.bucket', index=2,
number=3, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=305,
serialized_end=404,
)
_BUCKET = _descriptor.Descriptor(
name='Bucket',
full_name='io.prometheus.client.Bucket',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='cumulative_count', full_name='io.prometheus.client.Bucket.cumulative_count', index=0,
number=1, type=4, cpp_type=4, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='upper_bound', full_name='io.prometheus.client.Bucket.upper_bound', index=1,
number=2, type=1, cpp_type=5, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=406,
serialized_end=461,
)
_METRIC = _descriptor.Descriptor(
name='Metric',
full_name='io.prometheus.client.Metric',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='label', full_name='io.prometheus.client.Metric.label', index=0,
number=1, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='gauge', full_name='io.prometheus.client.Metric.gauge', index=1,
number=2, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='counter', full_name='io.prometheus.client.Metric.counter', index=2,
number=3, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='summary', full_name='io.prometheus.client.Metric.summary', index=3,
number=4, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='untyped', full_name='io.prometheus.client.Metric.untyped', index=4,
number=5, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='histogram', full_name='io.prometheus.client.Metric.histogram', index=5,
number=7, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='timestamp_ms', full_name='io.prometheus.client.Metric.timestamp_ms', index=6,
number=6, type=3, cpp_type=2, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=464,
serialized_end=782,
)
_METRICFAMILY = _descriptor.Descriptor(
name='MetricFamily',
full_name='io.prometheus.client.MetricFamily',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='name', full_name='io.prometheus.client.MetricFamily.name', index=0,
number=1, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='help', full_name='io.prometheus.client.MetricFamily.help', index=1,
number=2, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=_b("").decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='type', full_name='io.prometheus.client.MetricFamily.type', index=2,
number=3, type=14, cpp_type=8, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='metric', full_name='io.prometheus.client.MetricFamily.metric', index=3,
number=4, type=11, cpp_type=10, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
extension_ranges=[],
oneofs=[
],
serialized_start=785,
serialized_end=921,
)
_SUMMARY.fields_by_name['quantile'].message_type = _QUANTILE
_HISTOGRAM.fields_by_name['bucket'].message_type = _BUCKET
_METRIC.fields_by_name['label'].message_type = _LABELPAIR
_METRIC.fields_by_name['gauge'].message_type = _GAUGE
_METRIC.fields_by_name['counter'].message_type = _COUNTER
_METRIC.fields_by_name['summary'].message_type = _SUMMARY
_METRIC.fields_by_name['untyped'].message_type = _UNTYPED
_METRIC.fields_by_name['histogram'].message_type = _HISTOGRAM
_METRICFAMILY.fields_by_name['type'].enum_type = _METRICTYPE
_METRICFAMILY.fields_by_name['metric'].message_type = _METRIC
DESCRIPTOR.message_types_by_name['LabelPair'] = _LABELPAIR
DESCRIPTOR.message_types_by_name['Gauge'] = _GAUGE
DESCRIPTOR.message_types_by_name['Counter'] = _COUNTER
DESCRIPTOR.message_types_by_name['Quantile'] = _QUANTILE
DESCRIPTOR.message_types_by_name['Summary'] = _SUMMARY
DESCRIPTOR.message_types_by_name['Untyped'] = _UNTYPED
DESCRIPTOR.message_types_by_name['Histogram'] = _HISTOGRAM
DESCRIPTOR.message_types_by_name['Bucket'] = _BUCKET
DESCRIPTOR.message_types_by_name['Metric'] = _METRIC
DESCRIPTOR.message_types_by_name['MetricFamily'] = _METRICFAMILY
DESCRIPTOR.enum_types_by_name['MetricType'] = _METRICTYPE
LabelPair = _reflection.GeneratedProtocolMessageType('LabelPair', (_message.Message,), dict(
DESCRIPTOR = _LABELPAIR,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.LabelPair)
))
_sym_db.RegisterMessage(LabelPair)
Gauge = _reflection.GeneratedProtocolMessageType('Gauge', (_message.Message,), dict(
DESCRIPTOR = _GAUGE,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Gauge)
))
_sym_db.RegisterMessage(Gauge)
Counter = _reflection.GeneratedProtocolMessageType('Counter', (_message.Message,), dict(
DESCRIPTOR = _COUNTER,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Counter)
))
_sym_db.RegisterMessage(Counter)
Quantile = _reflection.GeneratedProtocolMessageType('Quantile', (_message.Message,), dict(
DESCRIPTOR = _QUANTILE,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Quantile)
))
_sym_db.RegisterMessage(Quantile)
Summary = _reflection.GeneratedProtocolMessageType('Summary', (_message.Message,), dict(
DESCRIPTOR = _SUMMARY,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Summary)
))
_sym_db.RegisterMessage(Summary)
Untyped = _reflection.GeneratedProtocolMessageType('Untyped', (_message.Message,), dict(
DESCRIPTOR = _UNTYPED,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Untyped)
))
_sym_db.RegisterMessage(Untyped)
Histogram = _reflection.GeneratedProtocolMessageType('Histogram', (_message.Message,), dict(
DESCRIPTOR = _HISTOGRAM,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Histogram)
))
_sym_db.RegisterMessage(Histogram)
Bucket = _reflection.GeneratedProtocolMessageType('Bucket', (_message.Message,), dict(
DESCRIPTOR = _BUCKET,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Bucket)
))
_sym_db.RegisterMessage(Bucket)
Metric = _reflection.GeneratedProtocolMessageType('Metric', (_message.Message,), dict(
DESCRIPTOR = _METRIC,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.Metric)
))
_sym_db.RegisterMessage(Metric)
MetricFamily = _reflection.GeneratedProtocolMessageType('MetricFamily', (_message.Message,), dict(
DESCRIPTOR = _METRICFAMILY,
__module__ = 'metrics_pb2'
# @@protoc_insertion_point(class_scope:io.prometheus.client.MetricFamily)
))
_sym_db.RegisterMessage(MetricFamily)
DESCRIPTOR.has_options = True
DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('\n\024io.prometheus.client'))
# @@protoc_insertion_point(module_scope)

View File

@@ -1,5 +0,0 @@
*.gem
.bundle
Gemfile.lock
pkg
vendor/bundle

View File

@@ -1,4 +0,0 @@
source 'https://rubygems.org'
# Specify your gem's dependencies in prometheus-client-model.gemspec
gemspec

View File

@@ -1,17 +0,0 @@
VENDOR_BUNDLE = vendor/bundle
build: $(VENDOR_BUNDLE)/.bundled
BEEFCAKE_NAMESPACE=Prometheus::Client protoc --beefcake_out lib/prometheus/client/model -I .. ../metrics.proto
$(VENDOR_BUNDLE):
mkdir -p $@
$(VENDOR_BUNDLE)/.bundled: $(VENDOR_BUNDLE) Gemfile
bundle install --quiet --path $<
@touch $@
clean:
-rm -f lib/prometheus/client/model/metrics.pb.rb
-rm -rf $(VENDOR_BUNDLE)
.PHONY: build clean

View File

@@ -1,31 +0,0 @@
# Prometheus Ruby client model
Data model artifacts for the [Prometheus Ruby client][1].
## Installation
gem install prometheus-client-model
## Usage
Build the artifacts from the protobuf specification:
make build
While this Gem's main purpose is to define the Prometheus data types for the
[client][1], it's possible to use it without the client to decode a stream of
delimited protobuf messages:
```ruby
require 'open-uri'
require 'prometheus/client/model'
CONTENT_TYPE = 'application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited'
stream = open('http://localhost:9090/metrics', 'Accept' => CONTENT_TYPE).read
while family = Prometheus::Client::MetricFamily.read_delimited(stream)
puts family
end
```
[1]: https://github.com/prometheus/client_ruby

View File

@@ -1 +0,0 @@
require "bundler/gem_tasks"

View File

@@ -1,2 +0,0 @@
require 'prometheus/client/model/metrics.pb'
require 'prometheus/client/model/version'

View File

@@ -1,111 +0,0 @@
## Generated from metrics.proto for io.prometheus.client
require "beefcake"
module Prometheus
module Client
module MetricType
COUNTER = 0
GAUGE = 1
SUMMARY = 2
UNTYPED = 3
HISTOGRAM = 4
end
class LabelPair
include Beefcake::Message
end
class Gauge
include Beefcake::Message
end
class Counter
include Beefcake::Message
end
class Quantile
include Beefcake::Message
end
class Summary
include Beefcake::Message
end
class Untyped
include Beefcake::Message
end
class Histogram
include Beefcake::Message
end
class Bucket
include Beefcake::Message
end
class Metric
include Beefcake::Message
end
class MetricFamily
include Beefcake::Message
end
class LabelPair
optional :name, :string, 1
optional :value, :string, 2
end
class Gauge
optional :value, :double, 1
end
class Counter
optional :value, :double, 1
end
class Quantile
optional :quantile, :double, 1
optional :value, :double, 2
end
class Summary
optional :sample_count, :uint64, 1
optional :sample_sum, :double, 2
repeated :quantile, Quantile, 3
end
class Untyped
optional :value, :double, 1
end
class Histogram
optional :sample_count, :uint64, 1
optional :sample_sum, :double, 2
repeated :bucket, Bucket, 3
end
class Bucket
optional :cumulative_count, :uint64, 1
optional :upper_bound, :double, 2
end
class Metric
repeated :label, LabelPair, 1
optional :gauge, Gauge, 2
optional :counter, Counter, 3
optional :summary, Summary, 4
optional :untyped, Untyped, 5
optional :histogram, Histogram, 7
optional :timestamp_ms, :int64, 6
end
class MetricFamily
optional :name, :string, 1
optional :help, :string, 2
optional :type, MetricType, 3
repeated :metric, Metric, 4
end
end
end

View File

@@ -1,7 +0,0 @@
module Prometheus
module Client
module Model
VERSION = '0.1.0'
end
end
end

View File

@@ -1,22 +0,0 @@
# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
require 'prometheus/client/model/version'
Gem::Specification.new do |spec|
spec.name = 'prometheus-client-model'
spec.version = Prometheus::Client::Model::VERSION
spec.authors = ['Tobias Schmidt']
spec.email = ['tobidt@gmail.com']
spec.summary = 'Data model artifacts for the Prometheus Ruby client'
spec.homepage = 'https://github.com/prometheus/client_model/tree/master/ruby'
spec.license = 'Apache 2.0'
spec.files = %w[README.md LICENSE] + Dir.glob('{lib/**/*}')
spec.require_paths = ['lib']
spec.add_dependency 'beefcake', '>= 0.4.0'
spec.add_development_dependency 'bundler', '~> 1.3'
spec.add_development_dependency 'rake'
end

View File

@@ -1,23 +0,0 @@
#!/usr/bin/python
from setuptools import setup
setup(
name = 'prometheus_client_model',
version = '0.0.1',
author = 'Matt T. Proud',
author_email = 'matt.proud@gmail.com',
description = 'Data model artifacts for the Prometheus client.',
license = 'Apache License 2.0',
url = 'http://github.com/prometheus/client_model',
packages = ['prometheus', 'prometheus/client', 'prometheus/client/model'],
package_dir = {'': 'python'},
requires = ['protobuf(==2.4.1)'],
platforms = 'Platform Independent',
classifiers = ['Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'License :: OSI Approved :: Apache Software License',
'Operating System :: OS Independent',
'Topic :: Software Development :: Testing',
'Topic :: System :: Monitoring'])

File diff suppressed because it is too large

View File

@@ -1,6 +0,0 @@
sudo: false
language: go
go:
- 1.7.5
- tip

View File

@@ -1,18 +0,0 @@
# Contributing
Prometheus uses GitHub to manage reviews of pull requests.
* If you have a trivial fix or improvement, go ahead and create a pull request,
addressing (with `@...`) the maintainer of this repository (see
[MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.
* If you plan to do something more involved, first discuss your ideas
on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
This will avoid unnecessary work and surely give you and us a good deal
of inspiration.
* Relevant coding style guidelines are the [Go Code Review
Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
and the _Formatting and style_ section of Peter Bourgon's [Go: Best
Practices for Production
Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).

View File

@@ -1 +0,0 @@
* Fabian Reinartz <fabian.reinartz@coreos.com>

View File

@@ -1,12 +0,0 @@
# Common
[![Build Status](https://travis-ci.org/prometheus/common.svg)](https://travis-ci.org/prometheus/common)
This repository contains Go libraries that are shared across Prometheus
components and libraries.
* **config**: Common configuration structures
* **expfmt**: Decoding and encoding for the exposition format
* **log**: A logging wrapper around [logrus](https://github.com/sirupsen/logrus)
* **model**: Shared data structures
* **route**: A routing wrapper around [httprouter](https://github.com/julienschmidt/httprouter) using `context.Context`
* **version**: Version information and metrics
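
As a rough illustration of the `expfmt` package listed above, a sketch that parses text-format exposition data into the client_model types; the sample input and the program structure are made up, while `TextParser` is the text-decoding entry point exercised elsewhere in this repository:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/prometheus/common/expfmt"
)

func main() {
	// A tiny, made-up sample in the text exposition format.
	input := "# TYPE http_requests_total counter\nhttp_requests_total{method=\"GET\"} 42\n"

	// TextToMetricFamilies returns a map of metric name to *dto.MetricFamily.
	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(strings.NewReader(input))
	if err != nil {
		panic(err)
	}
	for name, mf := range families {
		fmt.Println(name, mf.GetMetric()[0].GetCounter().GetValue())
	}
}
```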

View File

@@ -1,34 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This package no longer handles safe yaml parsing. In order to
// ensure correct yaml unmarshalling, use "yaml.UnmarshalStrict()".
package config
// Secret special type for storing secrets.
type Secret string
// MarshalYAML implements the yaml.Marshaler interface for Secrets.
func (s Secret) MarshalYAML() (interface{}, error) {
if s != "" {
return "<secret>", nil
}
return nil, nil
}
// UnmarshalYAML implements the yaml.Unmarshaler interface for Secrets.
func (s *Secret) UnmarshalYAML(unmarshal func(interface{}) error) error {
type plain Secret
return unmarshal((*plain)(s))
}
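
A short sketch of how the Secret type above behaves in practice; the wrapping struct, field names and values are illustrative, and the import path of this package is assumed to be `github.com/prometheus/common/config`:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"

	"github.com/prometheus/common/config"
)

// Illustrative wrapper type, not part of the package.
type creds struct {
	Username string        `yaml:"username"`
	Password config.Secret `yaml:"password"`
}

func main() {
	c := creds{Username: "arthurdent", Password: "42"}

	// MarshalYAML masks the secret, so dumping a config never leaks the value.
	out, err := yaml.Marshal(c)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // username: arthurdent / password: <secret>

	// UnmarshalYAML still reads the plain value from configuration files.
	var parsed creds
	if err := yaml.Unmarshal([]byte("username: a\npassword: hunter2\n"), &parsed); err != nil {
		panic(err)
	}
	fmt.Println(parsed.Password == "hunter2") // true
}
```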

View File

@@ -1,317 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"time"
"github.com/mwitkow/go-conntrack"
"gopkg.in/yaml.v2"
)
// BasicAuth contains basic HTTP authentication credentials.
type BasicAuth struct {
Username string `yaml:"username"`
Password Secret `yaml:"password,omitempty"`
PasswordFile string `yaml:"password_file,omitempty"`
}
// URL is a custom URL type that allows validation at configuration load time.
type URL struct {
*url.URL
}
// UnmarshalYAML implements the yaml.Unmarshaler interface for URLs.
func (u *URL) UnmarshalYAML(unmarshal func(interface{}) error) error {
var s string
if err := unmarshal(&s); err != nil {
return err
}
urlp, err := url.Parse(s)
if err != nil {
return err
}
u.URL = urlp
return nil
}
// MarshalYAML implements the yaml.Marshaler interface for URLs.
func (u URL) MarshalYAML() (interface{}, error) {
if u.URL != nil {
return u.String(), nil
}
return nil, nil
}
// HTTPClientConfig configures an HTTP client.
type HTTPClientConfig struct {
// The HTTP basic authentication credentials for the targets.
BasicAuth *BasicAuth `yaml:"basic_auth,omitempty"`
// The bearer token for the targets.
BearerToken Secret `yaml:"bearer_token,omitempty"`
// The bearer token file for the targets.
BearerTokenFile string `yaml:"bearer_token_file,omitempty"`
// HTTP proxy server to use to connect to the targets.
ProxyURL URL `yaml:"proxy_url,omitempty"`
// TLSConfig to use to connect to the targets.
TLSConfig TLSConfig `yaml:"tls_config,omitempty"`
}
// Validate validates the HTTPClientConfig to check that at most one of
// BearerToken, BasicAuth and BearerTokenFile is configured.
func (c *HTTPClientConfig) Validate() error {
if len(c.BearerToken) > 0 && len(c.BearerTokenFile) > 0 {
return fmt.Errorf("at most one of bearer_token & bearer_token_file must be configured")
}
if c.BasicAuth != nil && (len(c.BearerToken) > 0 || len(c.BearerTokenFile) > 0) {
return fmt.Errorf("at most one of basic_auth, bearer_token & bearer_token_file must be configured")
}
if c.BasicAuth != nil && (string(c.BasicAuth.Password) != "" && c.BasicAuth.PasswordFile != "") {
return fmt.Errorf("at most one of basic_auth password & password_file must be configured")
}
return nil
}
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (c *HTTPClientConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
type plain HTTPClientConfig
if err := unmarshal((*plain)(c)); err != nil {
return err
}
return c.Validate()
}
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (a *BasicAuth) UnmarshalYAML(unmarshal func(interface{}) error) error {
type plain BasicAuth
return unmarshal((*plain)(a))
}
// newClient returns an http.Client using the specified http.RoundTripper.
func newClient(rt http.RoundTripper) *http.Client {
return &http.Client{Transport: rt}
}
// NewClientFromConfig returns a new HTTP client configured for the
// given config.HTTPClientConfig. The name is used as go-conntrack metric label.
func NewClientFromConfig(cfg HTTPClientConfig, name string) (*http.Client, error) {
rt, err := NewRoundTripperFromConfig(cfg, name)
if err != nil {
return nil, err
}
return newClient(rt), nil
}
// NewRoundTripperFromConfig returns a new HTTP RoundTripper configured for the
// given config.HTTPClientConfig. The name is used as go-conntrack metric label.
func NewRoundTripperFromConfig(cfg HTTPClientConfig, name string) (http.RoundTripper, error) {
tlsConfig, err := NewTLSConfig(&cfg.TLSConfig)
if err != nil {
return nil, err
}
// The only timeout we care about is the configured scrape timeout.
// It is applied on request. So we leave out any timings here.
var rt http.RoundTripper = &http.Transport{
Proxy: http.ProxyURL(cfg.ProxyURL.URL),
MaxIdleConns: 20000,
MaxIdleConnsPerHost: 1000, // see https://github.com/golang/go/issues/13801
DisableKeepAlives: false,
TLSClientConfig: tlsConfig,
DisableCompression: true,
// 5 minutes is typically above the maximum sane scrape interval. So we can
// use keepalive for all configurations.
IdleConnTimeout: 5 * time.Minute,
DialContext: conntrack.NewDialContextFunc(
conntrack.DialWithTracing(),
conntrack.DialWithName(name),
),
}
// If a bearer token is provided, create a round tripper that will set the
// Authorization header correctly on each request.
if len(cfg.BearerToken) > 0 {
rt = NewBearerAuthRoundTripper(cfg.BearerToken, rt)
} else if len(cfg.BearerTokenFile) > 0 {
rt = NewBearerAuthFileRoundTripper(cfg.BearerTokenFile, rt)
}
if cfg.BasicAuth != nil {
rt = NewBasicAuthRoundTripper(cfg.BasicAuth.Username, cfg.BasicAuth.Password, cfg.BasicAuth.PasswordFile, rt)
}
// Return a new configured RoundTripper.
return rt, nil
}
type bearerAuthRoundTripper struct {
bearerToken Secret
rt http.RoundTripper
}
// NewBearerAuthRoundTripper adds the provided bearer token to a request unless the authorization
// header has already been set.
func NewBearerAuthRoundTripper(token Secret, rt http.RoundTripper) http.RoundTripper {
return &bearerAuthRoundTripper{token, rt}
}
func (rt *bearerAuthRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
if len(req.Header.Get("Authorization")) == 0 {
req = cloneRequest(req)
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", string(rt.bearerToken)))
}
return rt.rt.RoundTrip(req)
}
type bearerAuthFileRoundTripper struct {
bearerFile string
rt http.RoundTripper
}
// NewBearerAuthFileRoundTripper adds the bearer token read from the provided file to a request unless
// the authorization header has already been set. This file is read for every request.
func NewBearerAuthFileRoundTripper(bearerFile string, rt http.RoundTripper) http.RoundTripper {
return &bearerAuthFileRoundTripper{bearerFile, rt}
}
func (rt *bearerAuthFileRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
if len(req.Header.Get("Authorization")) == 0 {
b, err := ioutil.ReadFile(rt.bearerFile)
if err != nil {
return nil, fmt.Errorf("unable to read bearer token file %s: %s", rt.bearerFile, err)
}
bearerToken := strings.TrimSpace(string(b))
req = cloneRequest(req)
req.Header.Set("Authorization", "Bearer "+bearerToken)
}
return rt.rt.RoundTrip(req)
}
type basicAuthRoundTripper struct {
username string
password Secret
passwordFile string
rt http.RoundTripper
}
// NewBasicAuthRoundTripper will apply a BASIC auth authorization header to a request unless it has
// already been set.
func NewBasicAuthRoundTripper(username string, password Secret, passwordFile string, rt http.RoundTripper) http.RoundTripper {
return &basicAuthRoundTripper{username, password, passwordFile, rt}
}
func (rt *basicAuthRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
if len(req.Header.Get("Authorization")) != 0 {
return rt.rt.RoundTrip(req)
}
req = cloneRequest(req)
if rt.passwordFile != "" {
bs, err := ioutil.ReadFile(rt.passwordFile)
if err != nil {
return nil, fmt.Errorf("unable to read basic auth password file %s: %s", rt.passwordFile, err)
}
req.SetBasicAuth(rt.username, strings.TrimSpace(string(bs)))
} else {
req.SetBasicAuth(rt.username, strings.TrimSpace(string(rt.password)))
}
return rt.rt.RoundTrip(req)
}
// cloneRequest returns a clone of the provided *http.Request.
// The clone is a shallow copy of the struct and its Header map.
func cloneRequest(r *http.Request) *http.Request {
// Shallow copy of the struct.
r2 := new(http.Request)
*r2 = *r
// Deep copy of the Header.
r2.Header = make(http.Header)
for k, s := range r.Header {
r2.Header[k] = s
}
return r2
}
// NewTLSConfig creates a new tls.Config from the given TLSConfig.
func NewTLSConfig(cfg *TLSConfig) (*tls.Config, error) {
tlsConfig := &tls.Config{InsecureSkipVerify: cfg.InsecureSkipVerify}
// If a CA cert is provided then let's read it in so we can validate the
// scrape target's certificate properly.
if len(cfg.CAFile) > 0 {
caCertPool := x509.NewCertPool()
// Load CA cert.
caCert, err := ioutil.ReadFile(cfg.CAFile)
if err != nil {
return nil, fmt.Errorf("unable to use specified CA cert %s: %s", cfg.CAFile, err)
}
caCertPool.AppendCertsFromPEM(caCert)
tlsConfig.RootCAs = caCertPool
}
if len(cfg.ServerName) > 0 {
tlsConfig.ServerName = cfg.ServerName
}
// If a client cert & key is provided then configure TLS config accordingly.
if len(cfg.CertFile) > 0 && len(cfg.KeyFile) == 0 {
return nil, fmt.Errorf("client cert file %q specified without client key file", cfg.CertFile)
} else if len(cfg.KeyFile) > 0 && len(cfg.CertFile) == 0 {
return nil, fmt.Errorf("client key file %q specified without client cert file", cfg.KeyFile)
} else if len(cfg.CertFile) > 0 && len(cfg.KeyFile) > 0 {
cert, err := tls.LoadX509KeyPair(cfg.CertFile, cfg.KeyFile)
if err != nil {
return nil, fmt.Errorf("unable to use specified client cert (%s) & key (%s): %s", cfg.CertFile, cfg.KeyFile, err)
}
tlsConfig.Certificates = []tls.Certificate{cert}
}
tlsConfig.BuildNameToCertificate()
return tlsConfig, nil
}
// TLSConfig configures the options for TLS connections.
type TLSConfig struct {
// The CA cert to use for the targets.
CAFile string `yaml:"ca_file,omitempty"`
// The client cert file for the targets.
CertFile string `yaml:"cert_file,omitempty"`
// The client key file for the targets.
KeyFile string `yaml:"key_file,omitempty"`
// Used to verify the hostname for the targets.
ServerName string `yaml:"server_name,omitempty"`
// Disable target certificate validation.
InsecureSkipVerify bool `yaml:"insecure_skip_verify"`
}
// UnmarshalYAML implements the yaml.Unmarshaler interface.
func (c *TLSConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
type plain TLSConfig
return unmarshal((*plain)(c))
}
func (c HTTPClientConfig) String() string {
b, err := yaml.Marshal(c)
if err != nil {
return fmt.Sprintf("<error creating http client config string: %s>", err)
}
return string(b)
}
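
Tying the pieces of this file together, a hedged sketch of building an *http.Client from an HTTPClientConfig via NewClientFromConfig; all file paths, the server name and the URL are placeholders, and the import path of this package is assumed to be `github.com/prometheus/common/config`:

```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/prometheus/common/config"
)

func main() {
	// Placeholder paths and names; only the structure mirrors HTTPClientConfig above.
	cfg := config.HTTPClientConfig{
		BearerTokenFile: "/etc/app/token", // hypothetical path
		TLSConfig: config.TLSConfig{
			CAFile:     "/etc/app/ca.pem", // hypothetical path
			ServerName: "example.internal",
		},
	}
	if err := cfg.Validate(); err != nil {
		panic(err)
	}

	// NewClientFromConfig builds the RoundTripper chain (TLS, bearer token)
	// described above; the name labels the go-conntrack metrics.
	client, err := config.NewClientFromConfig(cfg, "example")
	if err != nil {
		panic(err)
	}
	resp, err := client.Get("https://example.internal/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(len(body))
}
```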

View File

@@ -1,618 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"reflect"
"strings"
"testing"
"gopkg.in/yaml.v2"
)
const (
TLSCAChainPath = "testdata/tls-ca-chain.pem"
ServerCertificatePath = "testdata/server.crt"
ServerKeyPath = "testdata/server.key"
BarneyCertificatePath = "testdata/barney.crt"
BarneyKeyNoPassPath = "testdata/barney-no-pass.key"
MissingCA = "missing/ca.crt"
MissingCert = "missing/cert.crt"
MissingKey = "missing/secret.key"
ExpectedMessage = "I'm here to serve you!!!"
BearerToken = "theanswertothegreatquestionoflifetheuniverseandeverythingisfortytwo"
BearerTokenFile = "testdata/bearer.token"
MissingBearerTokenFile = "missing/bearer.token"
ExpectedBearer = "Bearer " + BearerToken
ExpectedUsername = "arthurdent"
ExpectedPassword = "42"
)
var invalidHTTPClientConfigs = []struct {
httpClientConfigFile string
errMsg string
}{
{
httpClientConfigFile: "testdata/http.conf.bearer-token-and-file-set.bad.yml",
errMsg: "at most one of bearer_token & bearer_token_file must be configured",
},
{
httpClientConfigFile: "testdata/http.conf.empty.bad.yml",
errMsg: "at most one of basic_auth, bearer_token & bearer_token_file must be configured",
},
{
httpClientConfigFile: "testdata/http.conf.basic-auth.too-much.bad.yaml",
errMsg: "at most one of basic_auth password & password_file must be configured",
},
}
func newTestServer(handler func(w http.ResponseWriter, r *http.Request)) (*httptest.Server, error) {
testServer := httptest.NewUnstartedServer(http.HandlerFunc(handler))
tlsCAChain, err := ioutil.ReadFile(TLSCAChainPath)
if err != nil {
return nil, fmt.Errorf("Can't read %s", TLSCAChainPath)
}
serverCertificate, err := tls.LoadX509KeyPair(ServerCertificatePath, ServerKeyPath)
if err != nil {
return nil, fmt.Errorf("Can't load X509 key pair %s - %s", ServerCertificatePath, ServerKeyPath)
}
rootCAs := x509.NewCertPool()
rootCAs.AppendCertsFromPEM(tlsCAChain)
testServer.TLS = &tls.Config{
Certificates: make([]tls.Certificate, 1),
RootCAs: rootCAs,
ClientAuth: tls.RequireAndVerifyClientCert,
ClientCAs: rootCAs}
testServer.TLS.Certificates[0] = serverCertificate
testServer.TLS.BuildNameToCertificate()
testServer.StartTLS()
return testServer, nil
}
func TestNewClientFromConfig(t *testing.T) {
var newClientValidConfig = []struct {
clientConfig HTTPClientConfig
handler func(w http.ResponseWriter, r *http.Request)
}{
{
clientConfig: HTTPClientConfig{
TLSConfig: TLSConfig{
CAFile: "",
CertFile: BarneyCertificatePath,
KeyFile: BarneyKeyNoPassPath,
ServerName: "",
InsecureSkipVerify: true},
},
handler: func(w http.ResponseWriter, r *http.Request) {
fmt.Fprint(w, ExpectedMessage)
},
}, {
clientConfig: HTTPClientConfig{
TLSConfig: TLSConfig{
CAFile: TLSCAChainPath,
CertFile: BarneyCertificatePath,
KeyFile: BarneyKeyNoPassPath,
ServerName: "",
InsecureSkipVerify: false},
},
handler: func(w http.ResponseWriter, r *http.Request) {
fmt.Fprint(w, ExpectedMessage)
},
}, {
clientConfig: HTTPClientConfig{
BearerToken: BearerToken,
TLSConfig: TLSConfig{
CAFile: TLSCAChainPath,
CertFile: BarneyCertificatePath,
KeyFile: BarneyKeyNoPassPath,
ServerName: "",
InsecureSkipVerify: false},
},
handler: func(w http.ResponseWriter, r *http.Request) {
bearer := r.Header.Get("Authorization")
if bearer != ExpectedBearer {
fmt.Fprintf(w, "The expected Bearer Authorization (%s) differs from the obtained Bearer Authorization (%s)",
ExpectedBearer, bearer)
} else {
fmt.Fprint(w, ExpectedMessage)
}
},
}, {
clientConfig: HTTPClientConfig{
BearerTokenFile: BearerTokenFile,
TLSConfig: TLSConfig{
CAFile: TLSCAChainPath,
CertFile: BarneyCertificatePath,
KeyFile: BarneyKeyNoPassPath,
ServerName: "",
InsecureSkipVerify: false},
},
handler: func(w http.ResponseWriter, r *http.Request) {
bearer := r.Header.Get("Authorization")
if bearer != ExpectedBearer {
fmt.Fprintf(w, "The expected Bearer Authorization (%s) differs from the obtained Bearer Authorization (%s)",
ExpectedBearer, bearer)
} else {
fmt.Fprint(w, ExpectedMessage)
}
},
}, {
clientConfig: HTTPClientConfig{
BasicAuth: &BasicAuth{
Username: ExpectedUsername,
Password: ExpectedPassword,
},
TLSConfig: TLSConfig{
CAFile: TLSCAChainPath,
CertFile: BarneyCertificatePath,
KeyFile: BarneyKeyNoPassPath,
ServerName: "",
InsecureSkipVerify: false},
},
handler: func(w http.ResponseWriter, r *http.Request) {
username, password, ok := r.BasicAuth()
if !ok {
fmt.Fprintf(w, "The Authorization header wasn't set")
} else if ExpectedUsername != username {
fmt.Fprintf(w, "The expected username (%s) differs from the obtained username (%s).", ExpectedUsername, username)
} else if ExpectedPassword != password {
fmt.Fprintf(w, "The expected password (%s) differs from the obtained password (%s).", ExpectedPassword, password)
} else {
fmt.Fprint(w, ExpectedMessage)
}
},
},
}
for _, validConfig := range newClientValidConfig {
testServer, err := newTestServer(validConfig.handler)
if err != nil {
t.Fatal(err.Error())
}
defer testServer.Close()
client, err := NewClientFromConfig(validConfig.clientConfig, "test")
if err != nil {
t.Errorf("Can't create a client from this config: %+v", validConfig.clientConfig)
continue
}
response, err := client.Get(testServer.URL)
if err != nil {
t.Errorf("Can't connect to the test server using this config: %+v", validConfig.clientConfig)
continue
}
message, err := ioutil.ReadAll(response.Body)
response.Body.Close()
if err != nil {
t.Errorf("Can't read the server response body using this config: %+v", validConfig.clientConfig)
continue
}
trimMessage := strings.TrimSpace(string(message))
if ExpectedMessage != trimMessage {
t.Errorf("The expected message (%s) differs from the obtained message (%s) using this config: %+v",
ExpectedMessage, trimMessage, validConfig.clientConfig)
}
}
}
func TestNewClientFromInvalidConfig(t *testing.T) {
var newClientInvalidConfig = []struct {
clientConfig HTTPClientConfig
errorMsg string
}{
{
clientConfig: HTTPClientConfig{
TLSConfig: TLSConfig{
CAFile: MissingCA,
CertFile: "",
KeyFile: "",
ServerName: "",
InsecureSkipVerify: true},
},
errorMsg: fmt.Sprintf("unable to use specified CA cert %s:", MissingCA),
},
}
for _, invalidConfig := range newClientInvalidConfig {
client, err := NewClientFromConfig(invalidConfig.clientConfig, "test")
if client != nil {
t.Errorf("A client instance was returned instead of nil using this config: %+v", invalidConfig.clientConfig)
}
		if err == nil {
			t.Errorf("No error was returned using this config: %+v", invalidConfig.clientConfig)
			continue
		}
		if !strings.Contains(err.Error(), invalidConfig.errorMsg) {
			t.Errorf("Returned error %q does not contain the expected text %q", err.Error(), invalidConfig.errorMsg)
		}
}
}
func TestMissingBearerAuthFile(t *testing.T) {
cfg := HTTPClientConfig{
BearerTokenFile: MissingBearerTokenFile,
TLSConfig: TLSConfig{
CAFile: TLSCAChainPath,
CertFile: BarneyCertificatePath,
KeyFile: BarneyKeyNoPassPath,
ServerName: "",
InsecureSkipVerify: false},
}
handler := func(w http.ResponseWriter, r *http.Request) {
bearer := r.Header.Get("Authorization")
if bearer != ExpectedBearer {
fmt.Fprintf(w, "The expected Bearer Authorization (%s) differs from the obtained Bearer Authorization (%s)",
ExpectedBearer, bearer)
} else {
fmt.Fprint(w, ExpectedMessage)
}
}
testServer, err := newTestServer(handler)
if err != nil {
t.Fatal(err.Error())
}
defer testServer.Close()
client, err := NewClientFromConfig(cfg, "test")
if err != nil {
t.Fatal(err)
}
_, err = client.Get(testServer.URL)
	if err == nil {
		t.Fatal("Expected an error when the bearer token file is missing, but got none")
	}
	if !strings.Contains(err.Error(), "unable to read bearer token file missing/bearer.token: open missing/bearer.token: no such file or directory") {
		t.Fatalf("Unexpected error message: %v", err)
	}
}
func TestBearerAuthRoundTripper(t *testing.T) {
const (
newBearerToken = "goodbyeandthankyouforthefish"
)
fakeRoundTripper := NewRoundTripCheckRequest(func(req *http.Request) {
bearer := req.Header.Get("Authorization")
if bearer != ExpectedBearer {
t.Errorf("The expected Bearer Authorization (%s) differs from the obtained Bearer Authorization (%s)",
ExpectedBearer, bearer)
}
}, nil, nil)
// Normal flow.
bearerAuthRoundTripper := NewBearerAuthRoundTripper(BearerToken, fakeRoundTripper)
request, _ := http.NewRequest("GET", "/hitchhiker", nil)
request.Header.Set("User-Agent", "Douglas Adams mind")
bearerAuthRoundTripper.RoundTrip(request)
	// Should not modify an Authorization header that is already set.
bearerAuthRoundTripperShouldNotModifyExistingAuthorization := NewBearerAuthRoundTripper(newBearerToken, fakeRoundTripper)
request, _ = http.NewRequest("GET", "/hitchhiker", nil)
request.Header.Set("Authorization", ExpectedBearer)
bearerAuthRoundTripperShouldNotModifyExistingAuthorization.RoundTrip(request)
}
func TestBearerAuthFileRoundTripper(t *testing.T) {
const (
newBearerToken = "goodbyeandthankyouforthefish"
)
fakeRoundTripper := NewRoundTripCheckRequest(func(req *http.Request) {
bearer := req.Header.Get("Authorization")
if bearer != ExpectedBearer {
t.Errorf("The expected Bearer Authorization (%s) differs from the obtained Bearer Authorization (%s)",
ExpectedBearer, bearer)
}
}, nil, nil)
// Normal flow.
bearerAuthRoundTripper := NewBearerAuthFileRoundTripper(BearerTokenFile, fakeRoundTripper)
request, _ := http.NewRequest("GET", "/hitchhiker", nil)
request.Header.Set("User-Agent", "Douglas Adams mind")
bearerAuthRoundTripper.RoundTrip(request)
	// Should not modify an Authorization header that is already set.
bearerAuthRoundTripperShouldNotModifyExistingAuthorization := NewBearerAuthFileRoundTripper(MissingBearerTokenFile, fakeRoundTripper)
request, _ = http.NewRequest("GET", "/hitchhiker", nil)
request.Header.Set("Authorization", ExpectedBearer)
bearerAuthRoundTripperShouldNotModifyExistingAuthorization.RoundTrip(request)
}
func TestTLSConfig(t *testing.T) {
configTLSConfig := TLSConfig{
CAFile: TLSCAChainPath,
CertFile: BarneyCertificatePath,
KeyFile: BarneyKeyNoPassPath,
ServerName: "localhost",
InsecureSkipVerify: false}
tlsCAChain, err := ioutil.ReadFile(TLSCAChainPath)
if err != nil {
t.Fatalf("Can't read the CA certificate chain (%s)",
TLSCAChainPath)
}
rootCAs := x509.NewCertPool()
rootCAs.AppendCertsFromPEM(tlsCAChain)
barneyCertificate, err := tls.LoadX509KeyPair(BarneyCertificatePath, BarneyKeyNoPassPath)
if err != nil {
t.Fatalf("Can't load the client key pair ('%s' and '%s'). Reason: %s",
BarneyCertificatePath, BarneyKeyNoPassPath, err)
}
expectedTLSConfig := &tls.Config{
RootCAs: rootCAs,
Certificates: []tls.Certificate{barneyCertificate},
ServerName: configTLSConfig.ServerName,
InsecureSkipVerify: configTLSConfig.InsecureSkipVerify}
expectedTLSConfig.BuildNameToCertificate()
tlsConfig, err := NewTLSConfig(&configTLSConfig)
if err != nil {
t.Fatalf("Can't create a new TLS Config from a configuration (%s).", err)
}
if !reflect.DeepEqual(tlsConfig, expectedTLSConfig) {
t.Fatalf("Unexpected TLS Config result: \n\n%+v\n expected\n\n%+v", tlsConfig, expectedTLSConfig)
}
}
func TestTLSConfigEmpty(t *testing.T) {
configTLSConfig := TLSConfig{
CAFile: "",
CertFile: "",
KeyFile: "",
ServerName: "",
InsecureSkipVerify: true}
expectedTLSConfig := &tls.Config{
InsecureSkipVerify: configTLSConfig.InsecureSkipVerify}
expectedTLSConfig.BuildNameToCertificate()
tlsConfig, err := NewTLSConfig(&configTLSConfig)
if err != nil {
t.Fatalf("Can't create a new TLS Config from a configuration (%s).", err)
}
if !reflect.DeepEqual(tlsConfig, expectedTLSConfig) {
t.Fatalf("Unexpected TLS Config result: \n\n%+v\n expected\n\n%+v", tlsConfig, expectedTLSConfig)
}
}
func TestTLSConfigInvalidCA(t *testing.T) {
var invalidTLSConfig = []struct {
configTLSConfig TLSConfig
errorMessage string
}{
{
configTLSConfig: TLSConfig{
CAFile: MissingCA,
CertFile: "",
KeyFile: "",
ServerName: "",
InsecureSkipVerify: false},
errorMessage: fmt.Sprintf("unable to use specified CA cert %s:", MissingCA),
}, {
configTLSConfig: TLSConfig{
CAFile: "",
CertFile: MissingCert,
KeyFile: BarneyKeyNoPassPath,
ServerName: "",
InsecureSkipVerify: false},
errorMessage: fmt.Sprintf("unable to use specified client cert (%s) & key (%s):", MissingCert, BarneyKeyNoPassPath),
}, {
configTLSConfig: TLSConfig{
CAFile: "",
CertFile: BarneyCertificatePath,
KeyFile: MissingKey,
ServerName: "",
InsecureSkipVerify: false},
errorMessage: fmt.Sprintf("unable to use specified client cert (%s) & key (%s):", BarneyCertificatePath, MissingKey),
},
}
	for _, anInvalidTLSConfig := range invalidTLSConfig {
		tlsConfig, err := NewTLSConfig(&anInvalidTLSConfig.configTLSConfig)
		if err == nil {
			t.Errorf("A TLS config (%+v) was created from an invalid configuration: %+v", tlsConfig, anInvalidTLSConfig.configTLSConfig)
			continue
		}
		if !strings.Contains(err.Error(), anInvalidTLSConfig.errorMessage) {
			t.Errorf("Expected the error to contain %q, but got: %s", anInvalidTLSConfig.errorMessage, err)
		}
	}
}
func TestBasicAuthNoPassword(t *testing.T) {
cfg, _, err := LoadHTTPConfigFile("testdata/http.conf.basic-auth.no-password.yaml")
	if err != nil {
		t.Fatalf("Error loading HTTP client config: %v", err)
	}
	client, err := NewClientFromConfig(*cfg, "test")
	if err != nil {
		t.Fatalf("Error creating HTTP Client: %v", err)
	}
rt, ok := client.Transport.(*basicAuthRoundTripper)
if !ok {
t.Fatalf("Error casting to basic auth transport, %v", client.Transport)
}
if rt.username != "user" {
t.Errorf("Bad HTTP client username: %s", rt.username)
}
if string(rt.password) != "" {
t.Errorf("Expected empty HTTP client password: %s", rt.password)
}
if string(rt.passwordFile) != "" {
t.Errorf("Expected empty HTTP client passwordFile: %s", rt.passwordFile)
}
}
func TestBasicAuthNoUsername(t *testing.T) {
cfg, _, err := LoadHTTPConfigFile("testdata/http.conf.basic-auth.no-username.yaml")
	if err != nil {
		t.Fatalf("Error loading HTTP client config: %v", err)
	}
	client, err := NewClientFromConfig(*cfg, "test")
	if err != nil {
		t.Fatalf("Error creating HTTP Client: %v", err)
	}
rt, ok := client.Transport.(*basicAuthRoundTripper)
if !ok {
t.Fatalf("Error casting to basic auth transport, %v", client.Transport)
}
if rt.username != "" {
t.Errorf("Got unexpected username: %s", rt.username)
}
if string(rt.password) != "secret" {
t.Errorf("Unexpected HTTP client password: %s", string(rt.password))
}
if string(rt.passwordFile) != "" {
t.Errorf("Expected empty HTTP client passwordFile: %s", rt.passwordFile)
}
}
func TestBasicAuthPasswordFile(t *testing.T) {
cfg, _, err := LoadHTTPConfigFile("testdata/http.conf.basic-auth.good.yaml")
	if err != nil {
		t.Fatalf("Error loading HTTP client config: %v", err)
	}
	client, err := NewClientFromConfig(*cfg, "test")
	if err != nil {
		t.Fatalf("Error creating HTTP Client: %v", err)
	}
	rt, ok := client.Transport.(*basicAuthRoundTripper)
	if !ok {
		t.Fatalf("Error casting to basic auth transport, %v", client.Transport)
	}
if rt.username != "user" {
t.Errorf("Bad HTTP client username: %s", rt.username)
}
if string(rt.password) != "" {
t.Errorf("Bad HTTP client password: %s", rt.password)
}
if string(rt.passwordFile) != "testdata/basic-auth-password" {
t.Errorf("Bad HTTP client passwordFile: %s", rt.passwordFile)
}
}
func TestHideHTTPClientConfigSecrets(t *testing.T) {
c, _, err := LoadHTTPConfigFile("testdata/http.conf.good.yml")
	if err != nil {
		t.Fatalf("Error parsing %s: %s", "testdata/http.conf.good.yml", err)
	}
// String method must not reveal authentication credentials.
s := c.String()
if strings.Contains(s, "mysecret") {
t.Fatal("http client config's String method reveals authentication credentials.")
}
}
func TestValidateHTTPConfig(t *testing.T) {
cfg, _, err := LoadHTTPConfigFile("testdata/http.conf.good.yml")
	if err != nil {
		t.Fatalf("Error loading HTTP client config: %v", err)
	}
err = cfg.Validate()
if err != nil {
t.Fatalf("Error validating %s: %s", "testdata/http.conf.good.yml", err)
}
}
func TestInvalidHTTPConfigs(t *testing.T) {
for _, ee := range invalidHTTPClientConfigs {
_, _, err := LoadHTTPConfigFile(ee.httpClientConfigFile)
if err == nil {
t.Error("Expected error with config but got none")
continue
}
if !strings.Contains(err.Error(), ee.errMsg) {
t.Errorf("Expected error for invalid HTTP client configuration to contain %q but got: %s", ee.errMsg, err)
}
}
}
// LoadHTTPConfig parses the YAML input s into an HTTPClientConfig.
func LoadHTTPConfig(s string) (*HTTPClientConfig, error) {
cfg := &HTTPClientConfig{}
err := yaml.UnmarshalStrict([]byte(s), cfg)
if err != nil {
return nil, err
}
return cfg, nil
}
// LoadHTTPConfigFile parses the given YAML file into an HTTPClientConfig.
func LoadHTTPConfigFile(filename string) (*HTTPClientConfig, []byte, error) {
content, err := ioutil.ReadFile(filename)
if err != nil {
return nil, nil, err
}
cfg, err := LoadHTTPConfig(string(content))
if err != nil {
return nil, nil, err
}
return cfg, content, nil
}
type roundTrip struct {
theResponse *http.Response
theError error
}
func (rt *roundTrip) RoundTrip(r *http.Request) (*http.Response, error) {
return rt.theResponse, rt.theError
}
type roundTripCheckRequest struct {
checkRequest func(*http.Request)
roundTrip
}
func (rt *roundTripCheckRequest) RoundTrip(r *http.Request) (*http.Response, error) {
rt.checkRequest(r)
return rt.theResponse, rt.theError
}
// NewRoundTripCheckRequest creates a new instance of a type that implements http.RoundTripper,
// which, before returning theResponse and theError, executes checkRequest against an http.Request.
func NewRoundTripCheckRequest(checkRequest func(*http.Request), theResponse *http.Response, theError error) http.RoundTripper {
return &roundTripCheckRequest{
checkRequest: checkRequest,
roundTrip: roundTrip{
theResponse: theResponse,
theError: theError}}
}
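
The helper above is generic enough to be reused by any test that needs to assert on an outgoing request without reaching a real server. Below is a purely illustrative sketch (not part of the original test suite): the test name and token value are hypothetical, and it assumes a file in the same config package as the helpers it calls.

// Illustrative sketch only: check that NewBearerAuthRoundTripper injects the
// Authorization header, using NewRoundTripCheckRequest as the fake transport.
package config

import (
	"net/http"
	"testing"
)

func TestBearerAuthHeaderInjectedSketch(t *testing.T) {
	const token = "sampletoken" // hypothetical token, placeholder only
	fake := NewRoundTripCheckRequest(func(req *http.Request) {
		if got := req.Header.Get("Authorization"); got != "Bearer "+token {
			t.Errorf("unexpected Authorization header: %q", got)
		}
	}, &http.Response{StatusCode: http.StatusOK}, nil)
	rt := NewBearerAuthRoundTripper(token, fake)
	request, _ := http.NewRequest("GET", "http://example.invalid/", nil)
	if _, err := rt.RoundTrip(request); err != nil {
		t.Fatalf("RoundTrip returned an unexpected error: %v", err)
	}
}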

View File

@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAxmYjfBZhZbAup9uSULehoqPCv/U+77ETxUNyS2nviWEHDAb/
pFS8Btx4oCQ1ECVSyxcUmXSlrvDjMY4sisOHvndNRlGi274M5a8Q5yD1BUqvxq3u
XB/+SYNVShBzaswrSjpzMe89AlOPxPjnE14OXh00j2hHunOG4jhlWgJnY0YyvUQQ
YWO6KrmKMiZ4MgmY0SWh/ZhlkDJPtkp3aUVM2sheCru/70E9viLGfdlhc2pIMshy
wNp4/5IkHBZwbqXFFGX4sRtSXI/auZNvcHOBse+3e3BonWvBWS2lIYbzpX3vLB7B
E9BGIxWn1fgNQr14yFPaccSszBvgtmEUONolnwIDAQABAoIBAQC7nBhQHgXKGBl2
Z97rb0pstrjRtsLl/Cg68LWi9LEr0tHMIM4bgnkvb8qtfK+k7fZl0BSNrE2EqYvd
75jVO2MgzEYJieccLpKZm7u7JGIut9qSYSU2fpaCw6uiVv4dbqY9EhqejKG/km8w
j0JMATRK8Qkj1zOE7/wL7dKBlCZaK3u+OT17spuA/21PG/cLiPaSGSA3CU/eqbkU
BD6JeBxp33XNTytwWoOvarsigpL0dGqQ7+qhGq6t69qFfWoe9rimV7Ya+tB9zF/U
HzOIEspOYvzxe+C7VJjlVFr4haMYmsrO9qRUJ2ofp49OLVdfEANsdVISSvS63BEp
gBZN8Ko5AoGBAO1z8y8YCsI+2vBG6nxZ1eMba0KHi3bS8db1TaenJBV22w6WQATh
hEaU6VLMFcMvrOUjXN/7HJfnEMyvFT6gb9obPDVEMZw88s9lVN6njgGLZR/jodyN
7N7utLopN043Ra0WfEILAXPSz8esT1yn05OZV6AFHxJEWMrX3/4+spCLAoGBANXl
RomieVY4u3FF/uzhbzKNNb9ETxrQuexfbangKp5eLniwnr2SQWIbyPzeurwp15J8
HvxB2vpNvs1khSwNx9dQfMdiUVPGLWj7MimAHTHsnQ9LVV9W28ghuSWbjQDGTUt1
WCCu1MkKIOzupbi+zgsNlI33yilRQKAb9SRxdy29AoGBAOKpvyZiPcrkMxwPpb/k
BU7QGpgcSR25CQ+Xg3QZEVHH7h1DgYLnPtwdQ4g8tj1mohTsp7hKvSWndRrdulrY
zUyWmOeD3BN2/pTI9rW/nceNp49EPHsLo2O+2xelRlzMWB98ikqEtPM59gt1SSB6
N3X6d3GR0fIe+d9PKEtK0Cs3AoGAZ9r8ReXSvm+ra5ON9Nx8znHMEAON2TpRnBi1
uY7zgpO+QrGXUfqKrqVJEKbgym4SkribnuYm+fP32eid1McYKk6VV4ZAcMm/0MJv
F8Fx64S0ufFdEX6uFl1xdXYyn5apfyMJ2EyrWrYFSKWTZ8GVb753S/tteGRQWa1Z
eQly0Y0CgYEAnI6G9KFvXI+MLu5y2LPYAwsesDFzaWwyDl96ioQTA9hNSrjR33Vw
xwpiEe0T/WKF8NQ0QWnrQDbTvuCvZUK37TVxscYWuItL6vnBrYqr4Ck0j1BcGwV5
jT581A/Vw8JJiR/vfcxgmrFYqoUmkMKDmCN1oImfz09GtQ4jQ1rlxz8=
-----END RSA PRIVATE KEY-----

View File

@@ -1,96 +0,0 @@
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 2 (0x2)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=NO, O=Green AS, OU=Green Certificate Authority, CN=Green TLS CA
Validity
Not Before: Jul 13 04:02:47 2017 GMT
Not After : Jul 13 04:02:47 2019 GMT
Subject: C=NO, O=Telenor AS, OU=Support, CN=Barney Rubble
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:c6:66:23:7c:16:61:65:b0:2e:a7:db:92:50:b7:
a1:a2:a3:c2:bf:f5:3e:ef:b1:13:c5:43:72:4b:69:
ef:89:61:07:0c:06:ff:a4:54:bc:06:dc:78:a0:24:
35:10:25:52:cb:17:14:99:74:a5:ae:f0:e3:31:8e:
2c:8a:c3:87:be:77:4d:46:51:a2:db:be:0c:e5:af:
10:e7:20:f5:05:4a:af:c6:ad:ee:5c:1f:fe:49:83:
55:4a:10:73:6a:cc:2b:4a:3a:73:31:ef:3d:02:53:
8f:c4:f8:e7:13:5e:0e:5e:1d:34:8f:68:47:ba:73:
86:e2:38:65:5a:02:67:63:46:32:bd:44:10:61:63:
ba:2a:b9:8a:32:26:78:32:09:98:d1:25:a1:fd:98:
65:90:32:4f:b6:4a:77:69:45:4c:da:c8:5e:0a:bb:
bf:ef:41:3d:be:22:c6:7d:d9:61:73:6a:48:32:c8:
72:c0:da:78:ff:92:24:1c:16:70:6e:a5:c5:14:65:
f8:b1:1b:52:5c:8f:da:b9:93:6f:70:73:81:b1:ef:
b7:7b:70:68:9d:6b:c1:59:2d:a5:21:86:f3:a5:7d:
ef:2c:1e:c1:13:d0:46:23:15:a7:d5:f8:0d:42:bd:
78:c8:53:da:71:c4:ac:cc:1b:e0:b6:61:14:38:da:
25:9f
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature
X509v3 Basic Constraints:
CA:FALSE
X509v3 Extended Key Usage:
TLS Web Client Authentication
X509v3 Subject Key Identifier:
F4:17:02:DD:1B:01:AB:C5:BC:17:A4:5C:4B:75:8E:EC:B1:E0:C8:F1
X509v3 Authority Key Identifier:
keyid:AE:42:88:75:DD:05:A6:8E:48:7F:50:69:F9:B7:34:23:49:B8:B4:71
Authority Information Access:
CA Issuers - URI:http://green.no/ca/tls-ca.cer
X509v3 CRL Distribution Points:
Full Name:
URI:http://green.no/ca/tls-ca.crl
X509v3 Subject Alternative Name:
email:barney@telenor.no
Signature Algorithm: sha1WithRSAEncryption
96:9a:c5:41:8a:2f:4a:c4:80:d9:2b:1a:cf:07:85:e9:b6:18:
01:20:41:b9:c3:d4:ca:d3:2d:66:c3:1d:52:7f:25:d7:92:0c:
e9:a9:ae:e6:2e:fa:9d:0a:cf:84:b9:03:f2:63:e3:d3:c9:70:
6a:ac:04:5e:a9:2d:a2:43:7a:34:60:f7:a9:32:e1:48:ec:c6:
03:ac:b3:06:2e:48:6e:d0:35:11:31:3d:0c:04:66:41:e6:b2:
ec:8c:68:f8:e4:bc:47:85:39:60:69:a9:8a:ee:2f:56:88:8a:
19:45:d0:84:8e:c2:27:2c:82:9c:07:6c:34:ae:41:61:63:f9:
32:cb:8b:33:ea:2c:15:5f:f9:35:b0:3c:51:4d:5f:30:de:0b:
88:28:94:79:f3:bd:69:37:ad:12:20:e1:6b:1d:b6:77:d9:83:
db:81:a4:53:6c:0f:6a:17:5e:2b:c1:94:c6:42:e3:73:cd:9e:
79:1b:8c:89:cd:da:ce:b0:f4:21:c5:32:25:04:6e:68:9f:a7:
ca:f4:c5:86:e5:4e:d9:fd:69:73:e6:15:50:6e:76:0f:73:5e:
7a:a3:f4:dc:15:4a:ab:bb:3c:9a:fa:9f:01:7a:5c:47:a9:a3:
68:1c:49:e0:37:37:77:af:87:07:16:e4:e1:d7:98:39:15:a6:
51:5d:4c:db
-----BEGIN CERTIFICATE-----
MIIEITCCAwmgAwIBAgIBAjANBgkqhkiG9w0BAQUFADBdMQswCQYDVQQGEwJOTzER
MA8GA1UECgwIR3JlZW4gQVMxJDAiBgNVBAsMG0dyZWVuIENlcnRpZmljYXRlIEF1
dGhvcml0eTEVMBMGA1UEAwwMR3JlZW4gVExTIENBMB4XDTE3MDcxMzA0MDI0N1oX
DTE5MDcxMzA0MDI0N1owTDELMAkGA1UEBhMCTk8xEzARBgNVBAoMClRlbGVub3Ig
QVMxEDAOBgNVBAsMB1N1cHBvcnQxFjAUBgNVBAMMDUJhcm5leSBSdWJibGUwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDGZiN8FmFlsC6n25JQt6Gio8K/
9T7vsRPFQ3JLae+JYQcMBv+kVLwG3HigJDUQJVLLFxSZdKWu8OMxjiyKw4e+d01G
UaLbvgzlrxDnIPUFSq/Gre5cH/5Jg1VKEHNqzCtKOnMx7z0CU4/E+OcTXg5eHTSP
aEe6c4biOGVaAmdjRjK9RBBhY7oquYoyJngyCZjRJaH9mGWQMk+2SndpRUzayF4K
u7/vQT2+IsZ92WFzakgyyHLA2nj/kiQcFnBupcUUZfixG1Jcj9q5k29wc4Gx77d7
cGida8FZLaUhhvOlfe8sHsET0EYjFafV+A1CvXjIU9pxxKzMG+C2YRQ42iWfAgMB
AAGjgfwwgfkwDgYDVR0PAQH/BAQDAgeAMAkGA1UdEwQCMAAwEwYDVR0lBAwwCgYI
KwYBBQUHAwIwHQYDVR0OBBYEFPQXAt0bAavFvBekXEt1juyx4MjxMB8GA1UdIwQY
MBaAFK5CiHXdBaaOSH9Qafm3NCNJuLRxMDkGCCsGAQUFBwEBBC0wKzApBggrBgEF
BQcwAoYdaHR0cDovL2dyZWVuLm5vL2NhL3Rscy1jYS5jZXIwLgYDVR0fBCcwJTAj
oCGgH4YdaHR0cDovL2dyZWVuLm5vL2NhL3Rscy1jYS5jcmwwHAYDVR0RBBUwE4ER
YmFybmV5QHRlbGVub3Iubm8wDQYJKoZIhvcNAQEFBQADggEBAJaaxUGKL0rEgNkr
Gs8Hhem2GAEgQbnD1MrTLWbDHVJ/JdeSDOmpruYu+p0Kz4S5A/Jj49PJcGqsBF6p
LaJDejRg96ky4UjsxgOsswYuSG7QNRExPQwEZkHmsuyMaPjkvEeFOWBpqYruL1aI
ihlF0ISOwicsgpwHbDSuQWFj+TLLizPqLBVf+TWwPFFNXzDeC4golHnzvWk3rRIg
4WsdtnfZg9uBpFNsD2oXXivBlMZC43PNnnkbjInN2s6w9CHFMiUEbmifp8r0xYbl
Ttn9aXPmFVBudg9zXnqj9NwVSqu7PJr6nwF6XEepo2gcSeA3N3evhwcW5OHXmDkV
plFdTNs=
-----END CERTIFICATE-----

View File

@@ -1 +0,0 @@
foobar

View File

@@ -1 +0,0 @@
theanswertothegreatquestionoflifetheuniverseandeverythingisfortytwo

View File

@@ -1,3 +0,0 @@
basic_auth:
username: user
password_file: testdata/basic-auth-password

View File

@@ -1,2 +0,0 @@
basic_auth:
username: user

View File

@@ -1,2 +0,0 @@
basic_auth:
password: secret

View File

@@ -1,4 +0,0 @@
basic_auth:
username: user
password: foo
password_file: testdata/basic-auth-password

View File

@@ -1,5 +0,0 @@
basic_auth:
username: username
password: "mysecret"
bearer_token: mysecret
bearer_token_file: file

View File

@@ -1,4 +0,0 @@
basic_auth:
username: username
password: mysecret
bearer_token_file: file

View File

@@ -1,4 +0,0 @@
basic_auth:
username: username
password: "mysecret"
proxy_url: "http://remote.host"

View File

@@ -1 +0,0 @@
bearer_token_file: file

View File

@@ -1,96 +0,0 @@
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4 (0x4)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=NO, O=Green AS, OU=Green Certificate Authority, CN=Green TLS CA
Validity
Not Before: Jul 26 12:47:08 2017 GMT
Not After : Jul 26 12:47:08 2019 GMT
Subject: C=NO, O=Green AS, OU=Green Certificate Authority, CN=Green TLS CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:97:43:c5:f6:24:b8:ce:30:12:70:ea:17:9c:c0:
ce:f2:ef:58:8b:12:7d:46:5e:01:f1:1a:93:b2:3e:
d8:cf:99:bc:10:32:f1:12:b0:ef:00:6c:d6:c4:45:
85:a8:33:7b:cd:ec:8f:4a:92:d0:5a:4a:41:69:7f:
e3:dd:7e:71:d2:21:9c:df:43:b5:6c:60:bb:2a:12:
a8:08:cf:c5:ee:08:7d:48:ea:4b:54:e4:82:d9:88:
b0:b8:5e:02:12:cb:0e:09:99:b7:5f:42:b6:d7:26:
34:0f:4a:e7:fc:ac:9c:59:cd:a1:50:4c:88:5f:f1:
d2:7e:5b:21:41:f0:37:50:80:48:71:50:26:61:26:
79:64:4b:7e:91:8d:0e:f4:27:fe:19:80:bf:39:55:
b7:f3:d0:cd:61:6c:d8:c1:c7:d3:26:77:92:1a:14:
42:56:cb:bc:fd:1a:4a:eb:17:d8:8d:af:d1:c0:46:
9f:f0:40:5e:0e:34:2f:e7:db:be:66:fd:89:0b:6b:
8c:71:c1:0b:0a:c5:c4:c4:eb:7f:44:c1:75:36:23:
fd:ed:b6:ee:87:d9:88:47:e1:4b:7c:60:53:e7:85:
1c:2f:82:4b:2b:5e:63:1a:49:17:36:2c:fc:39:23:
49:22:4d:43:b5:51:22:12:24:9e:31:44:d8:16:4e:
a8:eb
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Basic Constraints:
CA:FALSE
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Subject Key Identifier:
70:A9:FB:44:66:3C:63:96:E6:05:B2:74:47:C8:18:7E:43:6D:EE:8B
X509v3 Authority Key Identifier:
keyid:AE:42:88:75:DD:05:A6:8E:48:7F:50:69:F9:B7:34:23:49:B8:B4:71
Authority Information Access:
CA Issuers - URI:http://green.no/ca/tls-ca.cer
X509v3 CRL Distribution Points:
Full Name:
URI:http://green.no/ca/tls-ca.crl
X509v3 Subject Alternative Name:
IP Address:127.0.0.1, IP Address:127.0.0.0, DNS:localhost
Signature Algorithm: sha1WithRSAEncryption
56:1e:b8:52:ba:f5:72:42:ad:15:71:c1:5e:00:63:c9:4d:56:
f2:8d:a3:a9:91:db:d0:b5:1b:88:80:93:80:28:48:b2:d0:a9:
d0:ea:de:40:78:cc:57:8c:00:b8:65:99:68:95:98:9b:fb:a2:
43:21:ea:00:37:01:77:c7:3b:1a:ec:58:2d:25:9c:ad:23:41:
5e:ae:fd:ac:2f:26:81:b8:a7:49:9b:5a:10:fe:ad:c3:86:ab:
59:67:b0:c7:81:72:95:60:b5:cb:fc:9f:ad:27:16:50:85:76:
33:16:20:2c:1f:c6:14:09:0c:48:9f:c0:19:16:c9:fa:b0:d8:
bf:b7:8d:a7:aa:eb:fe:f8:6f:dd:2b:83:ee:c7:8a:df:c8:59:
e6:2e:13:1f:57:cc:6f:31:db:f7:b7:5c:3f:78:ad:22:2c:48:
bb:6d:c4:ab:dc:c1:76:34:29:d9:1e:67:e0:ac:37:2b:90:f9:
71:bd:cf:a1:01:b9:eb:0b:0b:79:2e:8b:52:3d:8e:13:97:c8:
05:a3:ef:68:82:49:12:2a:25:1a:48:49:b8:7c:3c:66:0d:74:
f9:00:8c:5b:57:d7:76:b1:26:95:86:b2:2e:a3:b2:9c:e0:eb:
2d:fc:77:03:8f:cd:56:46:3a:c9:6a:fa:72:e3:19:d8:ef:de:
4b:36:95:79
-----BEGIN CERTIFICATE-----
MIIEQjCCAyqgAwIBAgIBBDANBgkqhkiG9w0BAQUFADBdMQswCQYDVQQGEwJOTzER
MA8GA1UECgwIR3JlZW4gQVMxJDAiBgNVBAsMG0dyZWVuIENlcnRpZmljYXRlIEF1
dGhvcml0eTEVMBMGA1UEAwwMR3JlZW4gVExTIENBMB4XDTE3MDcyNjEyNDcwOFoX
DTE5MDcyNjEyNDcwOFowXTELMAkGA1UEBhMCTk8xETAPBgNVBAoMCEdyZWVuIEFT
MSQwIgYDVQQLDBtHcmVlbiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxFTATBgNVBAMM
DEdyZWVuIFRMUyBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJdD
xfYkuM4wEnDqF5zAzvLvWIsSfUZeAfEak7I+2M+ZvBAy8RKw7wBs1sRFhagze83s
j0qS0FpKQWl/491+cdIhnN9DtWxguyoSqAjPxe4IfUjqS1TkgtmIsLheAhLLDgmZ
t19CttcmNA9K5/ysnFnNoVBMiF/x0n5bIUHwN1CASHFQJmEmeWRLfpGNDvQn/hmA
vzlVt/PQzWFs2MHH0yZ3khoUQlbLvP0aSusX2I2v0cBGn/BAXg40L+fbvmb9iQtr
jHHBCwrFxMTrf0TBdTYj/e227ofZiEfhS3xgU+eFHC+CSyteYxpJFzYs/DkjSSJN
Q7VRIhIknjFE2BZOqOsCAwEAAaOCAQswggEHMA4GA1UdDwEB/wQEAwIFoDAJBgNV
HRMEAjAAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAdBgNVHQ4EFgQU
cKn7RGY8Y5bmBbJ0R8gYfkNt7oswHwYDVR0jBBgwFoAUrkKIdd0Fpo5If1Bp+bc0
I0m4tHEwOQYIKwYBBQUHAQEELTArMCkGCCsGAQUFBzAChh1odHRwOi8vZ3JlZW4u
bm8vY2EvdGxzLWNhLmNlcjAuBgNVHR8EJzAlMCOgIaAfhh1odHRwOi8vZ3JlZW4u
bm8vY2EvdGxzLWNhLmNybDAgBgNVHREEGTAXhwR/AAABhwR/AAAAgglsb2NhbGhv
c3QwDQYJKoZIhvcNAQEFBQADggEBAFYeuFK69XJCrRVxwV4AY8lNVvKNo6mR29C1
G4iAk4AoSLLQqdDq3kB4zFeMALhlmWiVmJv7okMh6gA3AXfHOxrsWC0lnK0jQV6u
/awvJoG4p0mbWhD+rcOGq1lnsMeBcpVgtcv8n60nFlCFdjMWICwfxhQJDEifwBkW
yfqw2L+3jaeq6/74b90rg+7Hit/IWeYuEx9XzG8x2/e3XD94rSIsSLttxKvcwXY0
KdkeZ+CsNyuQ+XG9z6EBuesLC3kui1I9jhOXyAWj72iCSRIqJRpISbh8PGYNdPkA
jFtX13axJpWGsi6jspzg6y38dwOPzVZGOslq+nLjGdjv3ks2lXk=
-----END CERTIFICATE-----

View File

@@ -1,28 +0,0 @@
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCXQ8X2JLjOMBJw
6hecwM7y71iLEn1GXgHxGpOyPtjPmbwQMvESsO8AbNbERYWoM3vN7I9KktBaSkFp
f+PdfnHSIZzfQ7VsYLsqEqgIz8XuCH1I6ktU5ILZiLC4XgISyw4JmbdfQrbXJjQP
Suf8rJxZzaFQTIhf8dJ+WyFB8DdQgEhxUCZhJnlkS36RjQ70J/4ZgL85Vbfz0M1h
bNjBx9Mmd5IaFEJWy7z9GkrrF9iNr9HARp/wQF4ONC/n275m/YkLa4xxwQsKxcTE
639EwXU2I/3ttu6H2YhH4Ut8YFPnhRwvgksrXmMaSRc2LPw5I0kiTUO1USISJJ4x
RNgWTqjrAgMBAAECggEAVurwo4FyV7gzwIIi00XPJLT3ceJL7dUy1HHrEG8gchnq
gHxlHdJhYyMnPVydcosyxp75r2YxJtCoSZDdRHbVvGLoGzpy0zW6FnDl8TpCh4aF
RxKp+rvbnFf5A9ew5U+cX1PelHRnT7V6EJeAOiaNKOUJnnR7oHX59/UxZQw9HJnX
3H4xUdRDmSS3BGKXEswbd7beQjqJtEIkbConfaw32yEod0w2MC0LI4miZ87/6Hsk
pyvfpeYxXp4z3BTvFBbf/GEBFuozu63VWHayB9PDmEN/TlphoQpJQihdR2r1lz/H
I5QwVlFTDvUSFitNLu+FoaHOfgLprQndbojBXb+tcQKBgQDHCPyM4V7k97RvJgmB
ELgZiDYufDrjRLXvFzrrZ7ySU3N+nx3Gz/EhtgbHicDjnRVagHBIwi/QAfBJksCd
xcioY5k2OW+8PSTsfFZTAA6XwJp/LGfJik/JjvAVv5CnxBu9lYG4WiSBJFp59ojC
zTmfEuB4GPwrjQvzjlqaSpij9QKBgQDCjriwAB2UJIdlgK+DkryLqgim5I4cteB3
+juVKz+S8ufFmVvmIXkyDcpyy/26VLC6esy8dV0JoWc4EeitoJvQD1JVZ5+CBTY+
r9umx18oe2A/ZgcEf/A3Zd94jM1MwriF6YC+eIOhwhpi7T1xTLf3hc9B0OJ5B1mA
vob9rGDtXwKBgD4rkW+UCictNIAvenKFPWxEPuBgT6ij0sx/DhlwCtgOFxprK0rp
syFbkVyMq+KtM3lUez5O4c5wfJUOsPnXSOlISxhD8qHy23C/GdvNPcGrGNc2kKjE
ek20R0wTzWSJ/jxG0gE6rwJjz5sfJfLrVd9ZbyI0c7hK03vdcHGXcXxtAoGAeGHl
BwnbQ3niyTx53VijD2wTVGjhQgSLstEDowYSnTNtk8eTpG6b1gvQc32jLnMOsyQe
oJGiEr5q5re2GBDjuDZyxGOMv9/Hs7wOlkCQsbS9Vh0kRHWBRlXjk2zT7yYhFMLp
pXFeSW2X9BRFS2CkCCUkm93K9AZHLDE3x6ishNMCgYEAsDsUCzGhI49Aqe+CMP2l
WPZl7SEMYS5AtdC5sLtbLYBl8+rMXVGL2opKXqVFYBYkqMJiHGdX3Ub6XSVKLYkN
vm4PWmlQS24ZT+jlUl4jk6JU6SAlM/o6ixZl5KNR7yQm6zN2O/RHDeYm0urUQ9tF
9dux7LbIFeOoJmoDTWG2+fI=
-----END PRIVATE KEY-----

View File

@@ -1,172 +0,0 @@
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 2 (0x2)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=NO, O=Green AS, OU=Green Certificate Authority, CN=Green Root CA
Validity
Not Before: Jul 13 03:47:20 2017 GMT
Not After : Jul 13 03:47:20 2027 GMT
Subject: C=NO, O=Green AS, OU=Green Certificate Authority, CN=Green TLS CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:b5:5a:b3:7a:7f:6a:5b:e9:ee:62:ee:4f:61:42:
79:93:06:bf:81:fc:9a:1f:b5:80:83:7c:b3:a6:94:
54:58:8a:b1:74:cb:c3:b8:3c:23:a8:69:1f:ca:2b:
af:be:97:ba:31:73:b5:b8:ce:d9:bf:bf:9a:7a:cf:
3a:64:51:83:c9:36:d2:f7:3b:3a:0e:4c:c7:66:2e:
bf:1a:df:ce:10:aa:3d:0f:19:74:03:7e:b5:10:bb:
e8:37:bd:62:f0:42:2d:df:3d:ca:70:50:10:17:ce:
a9:ec:55:8e:87:6f:ce:9a:04:36:14:96:cb:d1:a5:
48:d5:d2:87:02:62:93:4e:21:4a:ff:be:44:f1:d2:
7e:ed:74:da:c2:51:26:8e:03:a0:c2:bd:bd:5f:b0:
50:11:78:fd:ab:1d:04:86:6c:c1:8d:20:bd:05:5f:
51:67:c6:d3:07:95:92:2d:92:90:00:c6:9f:2d:dd:
36:5c:dc:78:10:7c:f6:68:39:1d:2c:e0:e1:26:64:
4f:36:34:66:a7:84:6a:90:15:3a:94:b7:79:b1:47:
f5:d2:51:95:54:bf:92:76:9a:b9:88:ee:63:f9:6c:
0d:38:c6:b6:1c:06:43:ed:24:1d:bb:6c:72:48:cc:
8c:f4:35:bc:43:fe:a6:96:4c:31:5f:82:0d:0d:20:
2a:3d
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
X509v3 Subject Key Identifier:
AE:42:88:75:DD:05:A6:8E:48:7F:50:69:F9:B7:34:23:49:B8:B4:71
X509v3 Authority Key Identifier:
keyid:60:93:53:2F:C7:CF:2A:D7:F3:09:28:F6:3C:AE:9C:50:EC:93:63:E5
Authority Information Access:
CA Issuers - URI:http://green.no/ca/root-ca.cer
X509v3 CRL Distribution Points:
Full Name:
URI:http://green.no/ca/root-ca.crl
Signature Algorithm: sha1WithRSAEncryption
15:a7:ac:d7:25:9e:2a:d4:d1:14:b4:99:38:3d:2f:73:61:2a:
d9:b6:8b:13:ea:fe:db:78:d9:0a:6c:df:26:6e:c1:d5:4a:97:
42:19:dd:97:05:03:e4:2b:fc:1e:1f:38:3c:4e:b0:3b:8c:38:
ad:2b:65:fa:35:2d:81:8e:e0:f6:0a:89:4c:38:97:01:4b:9c:
ac:4e:e1:55:17:ef:0a:ad:a7:eb:1e:4b:86:23:12:f1:52:69:
cb:a3:8a:ce:fb:14:8b:86:d7:bb:81:5e:bd:2a:c7:a7:79:58:
00:10:c0:db:ff:d4:a5:b9:19:74:b3:23:19:4a:1f:78:4b:a8:
b6:f6:20:26:c1:69:f9:89:7f:b8:1c:3b:a2:f9:37:31:80:2c:
b0:b6:2b:d2:84:44:d7:42:e4:e6:44:51:04:35:d9:1c:a4:48:
c6:b7:35:de:f2:ae:da:4b:ba:c8:09:42:8d:ed:7a:81:dc:ed:
9d:f0:de:6e:21:b9:01:1c:ad:64:3d:25:4c:91:94:f1:13:18:
bb:89:e9:48:ac:05:73:07:c8:db:bd:69:8e:6f:02:9d:b0:18:
c0:b9:e1:a8:b1:17:50:3d:ac:05:6e:6f:63:4f:b1:73:33:60:
9a:77:d2:81:8a:01:38:43:e9:4c:3c:90:63:a4:99:4b:d2:1b:
f9:1b:ec:ee
-----BEGIN CERTIFICATE-----
MIIECzCCAvOgAwIBAgIBAjANBgkqhkiG9w0BAQUFADBeMQswCQYDVQQGEwJOTzER
MA8GA1UECgwIR3JlZW4gQVMxJDAiBgNVBAsMG0dyZWVuIENlcnRpZmljYXRlIEF1
dGhvcml0eTEWMBQGA1UEAwwNR3JlZW4gUm9vdCBDQTAeFw0xNzA3MTMwMzQ3MjBa
Fw0yNzA3MTMwMzQ3MjBaMF0xCzAJBgNVBAYTAk5PMREwDwYDVQQKDAhHcmVlbiBB
UzEkMCIGA1UECwwbR3JlZW4gQ2VydGlmaWNhdGUgQXV0aG9yaXR5MRUwEwYDVQQD
DAxHcmVlbiBUTFMgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1
WrN6f2pb6e5i7k9hQnmTBr+B/JoftYCDfLOmlFRYirF0y8O4PCOoaR/KK6++l7ox
c7W4ztm/v5p6zzpkUYPJNtL3OzoOTMdmLr8a384Qqj0PGXQDfrUQu+g3vWLwQi3f
PcpwUBAXzqnsVY6Hb86aBDYUlsvRpUjV0ocCYpNOIUr/vkTx0n7tdNrCUSaOA6DC
vb1fsFAReP2rHQSGbMGNIL0FX1FnxtMHlZItkpAAxp8t3TZc3HgQfPZoOR0s4OEm
ZE82NGanhGqQFTqUt3mxR/XSUZVUv5J2mrmI7mP5bA04xrYcBkPtJB27bHJIzIz0
NbxD/qaWTDFfgg0NICo9AgMBAAGjgdQwgdEwDgYDVR0PAQH/BAQDAgEGMBIGA1Ud
EwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFK5CiHXdBaaOSH9Qafm3NCNJuLRxMB8G
A1UdIwQYMBaAFGCTUy/HzyrX8wko9jyunFDsk2PlMDoGCCsGAQUFBwEBBC4wLDAq
BggrBgEFBQcwAoYeaHR0cDovL2dyZWVuLm5vL2NhL3Jvb3QtY2EuY2VyMC8GA1Ud
HwQoMCYwJKAioCCGHmh0dHA6Ly9ncmVlbi5uby9jYS9yb290LWNhLmNybDANBgkq
hkiG9w0BAQUFAAOCAQEAFaes1yWeKtTRFLSZOD0vc2Eq2baLE+r+23jZCmzfJm7B
1UqXQhndlwUD5Cv8Hh84PE6wO4w4rStl+jUtgY7g9gqJTDiXAUucrE7hVRfvCq2n
6x5LhiMS8VJpy6OKzvsUi4bXu4FevSrHp3lYABDA2//UpbkZdLMjGUofeEuotvYg
JsFp+Yl/uBw7ovk3MYAssLYr0oRE10Lk5kRRBDXZHKRIxrc13vKu2ku6yAlCje16
gdztnfDebiG5ARytZD0lTJGU8RMYu4npSKwFcwfI271pjm8CnbAYwLnhqLEXUD2s
BW5vY0+xczNgmnfSgYoBOEPpTDyQY6SZS9Ib+Rvs7g==
-----END CERTIFICATE-----
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=NO, O=Green AS, OU=Green Certificate Authority, CN=Green Root CA
Validity
Not Before: Jul 13 03:44:39 2017 GMT
Not After : Dec 31 23:59:59 2030 GMT
Subject: C=NO, O=Green AS, OU=Green Certificate Authority, CN=Green Root CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:a7:e8:ed:de:d4:54:08:41:07:40:d5:c0:43:d6:
ab:d3:9e:21:87:c6:13:bf:a7:cf:3d:08:4f:c1:fe:
8f:e5:6c:c5:89:97:e5:27:75:26:c3:2a:73:2d:34:
7c:6f:35:8d:40:66:61:05:c0:eb:e9:b3:38:47:f8:
8b:26:35:2c:df:dc:24:31:fe:72:e3:87:10:d1:f7:
a0:57:b7:f3:b1:1a:fe:c7:4b:f8:7b:14:6d:73:08:
54:eb:63:3c:0c:ce:22:95:5f:3f:f2:6f:89:ae:63:
da:80:74:36:21:13:e8:91:01:58:77:cc:c2:f2:42:
bf:eb:b3:60:a7:21:ed:88:24:7f:eb:ff:07:41:9b:
93:c8:5f:6a:8e:a6:1a:15:3c:bc:e7:0d:fd:05:fd:
3c:c1:1c:1d:1f:57:2b:40:27:62:a1:7c:48:63:c1:
45:e7:2f:20:ed:92:1c:42:94:e4:58:70:7a:b6:d2:
85:c5:61:d8:cd:c6:37:6b:72:3b:7f:af:55:81:d6:
9d:dc:10:c9:d8:0e:81:e4:5e:40:13:2f:20:e8:6b:
46:81:ce:88:47:dd:38:71:3d:ef:21:cc:c0:67:cf:
0a:f4:e9:3f:a8:9d:26:25:2e:23:1e:a3:11:18:cb:
d1:70:1c:9e:7d:09:b1:a4:20:dc:95:15:1d:49:cf:
1b:ad
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
X509v3 Subject Key Identifier:
60:93:53:2F:C7:CF:2A:D7:F3:09:28:F6:3C:AE:9C:50:EC:93:63:E5
X509v3 Authority Key Identifier:
keyid:60:93:53:2F:C7:CF:2A:D7:F3:09:28:F6:3C:AE:9C:50:EC:93:63:E5
Signature Algorithm: sha1WithRSAEncryption
a7:77:71:8b:1a:e5:5a:5b:87:54:08:bf:07:3e:cb:99:2f:dc:
0e:8d:63:94:95:83:19:c9:92:82:d5:cb:5b:8f:1f:86:55:bc:
70:01:1d:33:46:ec:99:de:6b:1f:c3:c2:7a:dd:ef:69:ab:96:
58:ec:6c:6f:6c:70:82:71:8a:7f:f0:3b:80:90:d5:64:fa:80:
27:b8:7b:50:69:98:4b:37:99:ad:bf:a2:5b:93:22:5e:96:44:
3c:5a:cf:0c:f4:62:63:4a:6f:72:a7:f6:89:1d:09:26:3d:8f:
a8:86:d4:b4:bc:dd:b3:38:ca:c0:59:16:8c:20:1f:89:35:12:
b4:2d:c0:e9:de:93:e0:39:76:32:fc:80:db:da:44:26:fd:01:
32:74:97:f8:44:ae:fe:05:b1:34:96:13:34:56:73:b4:93:a5:
55:56:d1:01:51:9d:9c:55:e7:38:53:28:12:4e:38:72:0c:8f:
bd:91:4c:45:48:3b:e1:0d:03:5f:58:40:c9:d3:a0:ac:b3:89:
ce:af:27:8a:0f:ab:ec:72:4d:40:77:30:6b:36:fd:32:46:9f:
ee:f9:c4:f5:17:06:0f:4b:d3:88:f5:a4:2f:3d:87:9e:f5:26:
74:f0:c9:dc:cb:ad:d9:a7:8a:d3:71:15:00:d3:5d:9f:4c:59:
3e:24:63:f5
-----BEGIN CERTIFICATE-----
MIIDnDCCAoSgAwIBAgIBATANBgkqhkiG9w0BAQUFADBeMQswCQYDVQQGEwJOTzER
MA8GA1UECgwIR3JlZW4gQVMxJDAiBgNVBAsMG0dyZWVuIENlcnRpZmljYXRlIEF1
dGhvcml0eTEWMBQGA1UEAwwNR3JlZW4gUm9vdCBDQTAgFw0xNzA3MTMwMzQ0Mzla
GA8yMDMwMTIzMTIzNTk1OVowXjELMAkGA1UEBhMCTk8xETAPBgNVBAoMCEdyZWVu
IEFTMSQwIgYDVQQLDBtHcmVlbiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkxFjAUBgNV
BAMMDUdyZWVuIFJvb3QgQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQCn6O3e1FQIQQdA1cBD1qvTniGHxhO/p889CE/B/o/lbMWJl+UndSbDKnMtNHxv
NY1AZmEFwOvpszhH+IsmNSzf3CQx/nLjhxDR96BXt/OxGv7HS/h7FG1zCFTrYzwM
ziKVXz/yb4muY9qAdDYhE+iRAVh3zMLyQr/rs2CnIe2IJH/r/wdBm5PIX2qOphoV
PLznDf0F/TzBHB0fVytAJ2KhfEhjwUXnLyDtkhxClORYcHq20oXFYdjNxjdrcjt/
r1WB1p3cEMnYDoHkXkATLyDoa0aBzohH3ThxPe8hzMBnzwr06T+onSYlLiMeoxEY
y9FwHJ59CbGkINyVFR1JzxutAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNV
HRMBAf8EBTADAQH/MB0GA1UdDgQWBBRgk1Mvx88q1/MJKPY8rpxQ7JNj5TAfBgNV
HSMEGDAWgBRgk1Mvx88q1/MJKPY8rpxQ7JNj5TANBgkqhkiG9w0BAQUFAAOCAQEA
p3dxixrlWluHVAi/Bz7LmS/cDo1jlJWDGcmSgtXLW48fhlW8cAEdM0bsmd5rH8PC
et3vaauWWOxsb2xwgnGKf/A7gJDVZPqAJ7h7UGmYSzeZrb+iW5MiXpZEPFrPDPRi
Y0pvcqf2iR0JJj2PqIbUtLzdszjKwFkWjCAfiTUStC3A6d6T4Dl2MvyA29pEJv0B
MnSX+ESu/gWxNJYTNFZztJOlVVbRAVGdnFXnOFMoEk44cgyPvZFMRUg74Q0DX1hA
ydOgrLOJzq8nig+r7HJNQHcwazb9Mkaf7vnE9RcGD0vTiPWkLz2HnvUmdPDJ3Mut
2aeK03EVANNdn0xZPiRj9Q==
-----END CERTIFICATE-----

View File

@@ -1 +0,0 @@
cert_file: somefile

View File

@@ -1 +0,0 @@
insecure_skip_verify: true

View File

@@ -1 +0,0 @@
something_invalid: true

View File

@@ -1 +0,0 @@
key_file: somefile

View File

@@ -1,62 +0,0 @@
// Copyright 2016 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package config
import (
"crypto/tls"
"io/ioutil"
"reflect"
"testing"
"gopkg.in/yaml.v2"
)
// LoadTLSConfig parses the given YAML file into a tls.Config.
func LoadTLSConfig(filename string) (*tls.Config, error) {
content, err := ioutil.ReadFile(filename)
if err != nil {
return nil, err
}
cfg := TLSConfig{}
if err = yaml.UnmarshalStrict(content, &cfg); err != nil {
return nil, err
}
return NewTLSConfig(&cfg)
}
var expectedTLSConfigs = []struct {
filename string
config *tls.Config
}{
{
filename: "tls_config.empty.good.yml",
config: &tls.Config{},
}, {
filename: "tls_config.insecure.good.yml",
config: &tls.Config{InsecureSkipVerify: true},
},
}
func TestValidTLSConfig(t *testing.T) {
for _, cfg := range expectedTLSConfigs {
cfg.config.BuildNameToCertificate()
got, err := LoadTLSConfig("testdata/" + cfg.filename)
		if err != nil {
			t.Errorf("Error parsing %s: %s", cfg.filename, err)
			continue
		}
if !reflect.DeepEqual(*got, *cfg.config) {
t.Fatalf("%v: unexpected config result: \n\n%v\n expected\n\n%v", cfg.filename, got, cfg.config)
}
}
}
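
A complementary sketch (again not part of the original package): NewTLSConfig can just as well be driven from a TLSConfig value built in code, which is all LoadTLSConfig does after unmarshaling the YAML. The sketch reuses the path constants defined in http_config_test.go (same package); attaching the resulting *tls.Config to an http.Transport is its typical consumption.

// Sketch only: build a *tls.Config programmatically and attach it to a
// transport, mirroring what LoadTLSConfig does after YAML unmarshaling.
package config

import (
	"net/http"
	"testing"
)

func TestTLSConfigIntoTransportSketch(t *testing.T) {
	cfg := TLSConfig{
		CAFile:     TLSCAChainPath,
		CertFile:   BarneyCertificatePath,
		KeyFile:    BarneyKeyNoPassPath,
		ServerName: "localhost",
	}
	tlsCfg, err := NewTLSConfig(&cfg)
	if err != nil {
		t.Fatalf("Can't build a tls.Config from %+v: %v", cfg, err)
	}
	transport := &http.Transport{TLSClientConfig: tlsCfg}
	_ = transport // would back an http.Client in real usage
}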

View File

@@ -1,167 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package expfmt
import (
"bytes"
"compress/gzip"
"io"
"io/ioutil"
"testing"
"github.com/matttproud/golang_protobuf_extensions/pbutil"
dto "github.com/prometheus/client_model/go"
)
var parser TextParser
// Benchmarks to show how much of a penalty text-format parsing actually inflicts.
//
// Example results on Linux 3.13.0, Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz, go1.4.
//
// BenchmarkParseText 1000 1188535 ns/op 205085 B/op 6135 allocs/op
// BenchmarkParseTextGzip 1000 1376567 ns/op 246224 B/op 6151 allocs/op
// BenchmarkParseProto 10000 172790 ns/op 52258 B/op 1160 allocs/op
// BenchmarkParseProtoGzip 5000 324021 ns/op 94931 B/op 1211 allocs/op
// BenchmarkParseProtoMap 10000 187946 ns/op 58714 B/op 1203 allocs/op
//
// CONCLUSION: The overhead for the map is negligible. Text format needs ~5x more allocations.
// Without compression, it takes ~7x longer, but with compression (the more relevant scenario),
// the difference becomes less pronounced, only ~4x.
//
// The test data contains 248 samples.
// BenchmarkParseText benchmarks the parsing of a text-format scrape into metric
// family DTOs.
func BenchmarkParseText(b *testing.B) {
b.StopTimer()
data, err := ioutil.ReadFile("testdata/text")
if err != nil {
b.Fatal(err)
}
b.StartTimer()
for i := 0; i < b.N; i++ {
if _, err := parser.TextToMetricFamilies(bytes.NewReader(data)); err != nil {
b.Fatal(err)
}
}
}
// BenchmarkParseTextGzip benchmarks the parsing of a gzipped text-format scrape
// into metric family DTOs.
func BenchmarkParseTextGzip(b *testing.B) {
b.StopTimer()
data, err := ioutil.ReadFile("testdata/text.gz")
if err != nil {
b.Fatal(err)
}
b.StartTimer()
for i := 0; i < b.N; i++ {
in, err := gzip.NewReader(bytes.NewReader(data))
if err != nil {
b.Fatal(err)
}
if _, err := parser.TextToMetricFamilies(in); err != nil {
b.Fatal(err)
}
}
}
// BenchmarkParseProto benchmarks the parsing of a protobuf-format scrape into
// metric family DTOs. Note that this does not build a map of metric families
// (as the text version does), because it is not required for Prometheus
// ingestion either. (However, it is required for the text-format parsing, as
// the metric family might be sprinkled all over the text, while the
// protobuf-format guarantees bundling at one place.)
func BenchmarkParseProto(b *testing.B) {
b.StopTimer()
data, err := ioutil.ReadFile("testdata/protobuf")
if err != nil {
b.Fatal(err)
}
b.StartTimer()
for i := 0; i < b.N; i++ {
family := &dto.MetricFamily{}
in := bytes.NewReader(data)
for {
family.Reset()
if _, err := pbutil.ReadDelimited(in, family); err != nil {
if err == io.EOF {
break
}
b.Fatal(err)
}
}
}
}
// BenchmarkParseProtoGzip is like BenchmarkParseProto above, but parses gzipped
// protobuf format.
func BenchmarkParseProtoGzip(b *testing.B) {
b.StopTimer()
data, err := ioutil.ReadFile("testdata/protobuf.gz")
if err != nil {
b.Fatal(err)
}
b.StartTimer()
for i := 0; i < b.N; i++ {
family := &dto.MetricFamily{}
in, err := gzip.NewReader(bytes.NewReader(data))
if err != nil {
b.Fatal(err)
}
for {
family.Reset()
if _, err := pbutil.ReadDelimited(in, family); err != nil {
if err == io.EOF {
break
}
b.Fatal(err)
}
}
}
}
// BenchmarkParseProtoMap is like BenchmarkParseProto but DOES put the parsed
// metric family DTOs into a map. This is not happening during Prometheus
// ingestion. It is just here to measure the overhead of that map creation and
// separate it from the overhead of the text format parsing.
func BenchmarkParseProtoMap(b *testing.B) {
b.StopTimer()
data, err := ioutil.ReadFile("testdata/protobuf")
if err != nil {
b.Fatal(err)
}
b.StartTimer()
for i := 0; i < b.N; i++ {
families := map[string]*dto.MetricFamily{}
in := bytes.NewReader(data)
for {
family := &dto.MetricFamily{}
if _, err := pbutil.ReadDelimited(in, family); err != nil {
if err == io.EOF {
break
}
b.Fatal(err)
}
families[family.GetName()] = family
}
}
}

View File

@@ -1,435 +0,0 @@
// Copyright 2015 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package expfmt
import (
"io"
"net/http"
"reflect"
"sort"
"strings"
"testing"
"github.com/golang/protobuf/proto"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/model"
)
func TestTextDecoder(t *testing.T) {
var (
ts = model.Now()
in = `
# Only a quite simple scenario with two metric families.
# More complicated tests of the parser itself can be found in the text package.
# TYPE mf2 counter
mf2 3
mf1{label="value1"} -3.14 123456
mf1{label="value2"} 42
mf2 4
`
out = model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf1",
"label": "value1",
},
Value: -3.14,
Timestamp: 123456,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf1",
"label": "value2",
},
Value: 42,
Timestamp: ts,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf2",
},
Value: 3,
Timestamp: ts,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "mf2",
},
Value: 4,
Timestamp: ts,
},
}
)
dec := &SampleDecoder{
Dec: &textDecoder{r: strings.NewReader(in)},
Opts: &DecodeOptions{
Timestamp: ts,
},
}
var all model.Vector
for {
var smpls model.Vector
err := dec.Decode(&smpls)
if err == io.EOF {
break
}
if err != nil {
t.Fatal(err)
}
all = append(all, smpls...)
}
sort.Sort(all)
sort.Sort(out)
if !reflect.DeepEqual(all, out) {
t.Fatalf("output does not match")
}
}
func TestProtoDecoder(t *testing.T) {
var testTime = model.Now()
scenarios := []struct {
in string
expected model.Vector
fail bool
}{
{
in: "",
},
{
in: "\x8f\x01\n\rrequest_count\x12\x12Number of requests\x18\x00\"0\n#\n\x0fsome_!abel_name\x12\x10some_label_value\x1a\t\t\x00\x00\x00\x00\x00\x00E\xc0\"6\n)\n\x12another_label_name\x12\x13another_label_value\x1a\t\t\x00\x00\x00\x00\x00\x00U@",
fail: true,
},
{
in: "\x8f\x01\n\rrequest_count\x12\x12Number of requests\x18\x00\"0\n#\n\x0fsome_label_name\x12\x10some_label_value\x1a\t\t\x00\x00\x00\x00\x00\x00E\xc0\"6\n)\n\x12another_label_name\x12\x13another_label_value\x1a\t\t\x00\x00\x00\x00\x00\x00U@",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"some_label_name": "some_label_value",
},
Value: -42,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"another_label_name": "another_label_value",
},
Value: 84,
Timestamp: testTime,
},
},
},
{
in: "\xb9\x01\n\rrequest_count\x12\x12Number of requests\x18\x02\"O\n#\n\x0fsome_label_name\x12\x10some_label_value\"(\x1a\x12\t\xaeG\xe1z\x14\xae\xef?\x11\x00\x00\x00\x00\x00\x00E\xc0\x1a\x12\t+\x87\x16\xd9\xce\xf7\xef?\x11\x00\x00\x00\x00\x00\x00U\xc0\"A\n)\n\x12another_label_name\x12\x13another_label_value\"\x14\x1a\x12\t\x00\x00\x00\x00\x00\x00\xe0?\x11\x00\x00\x00\x00\x00\x00$@",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_count",
"some_label_name": "some_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_sum",
"some_label_name": "some_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"some_label_name": "some_label_value",
"quantile": "0.99",
},
Value: -42,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"some_label_name": "some_label_value",
"quantile": "0.999",
},
Value: -84,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_count",
"another_label_name": "another_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count_sum",
"another_label_name": "another_label_value",
},
Value: 0,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
"another_label_name": "another_label_value",
"quantile": "0.5",
},
Value: 10,
Timestamp: testTime,
},
},
},
{
in: "\x8d\x01\n\x1drequest_duration_microseconds\x12\x15The response latency.\x18\x04\"S:Q\b\x85\x15\x11\xcd\xcc\xccL\x8f\xcb:A\x1a\v\b{\x11\x00\x00\x00\x00\x00\x00Y@\x1a\f\b\x9c\x03\x11\x00\x00\x00\x00\x00\x00^@\x1a\f\b\xd0\x04\x11\x00\x00\x00\x00\x00\x00b@\x1a\f\b\xf4\v\x11\x9a\x99\x99\x99\x99\x99e@\x1a\f\b\x85\x15\x11\x00\x00\x00\x00\x00\x00\xf0\u007f",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "100",
},
Value: 123,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "120",
},
Value: 412,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "144",
},
Value: 592,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "172.8",
},
Value: 1524,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_bucket",
"le": "+Inf",
},
Value: 2693,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_sum",
},
Value: 1756047.3,
Timestamp: testTime,
},
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_duration_microseconds_count",
},
Value: 2693,
Timestamp: testTime,
},
},
},
{
// The metric type is unset in this protobuf, which needs to be handled
// correctly by the decoder.
in: "\x1c\n\rrequest_count\"\v\x1a\t\t\x00\x00\x00\x00\x00\x00\xf0?",
expected: model.Vector{
&model.Sample{
Metric: model.Metric{
model.MetricNameLabel: "request_count",
},
Value: 1,
Timestamp: testTime,
},
},
},
}
for i, scenario := range scenarios {
dec := &SampleDecoder{
Dec: &protoDecoder{r: strings.NewReader(scenario.in)},
Opts: &DecodeOptions{
Timestamp: testTime,
},
}
var all model.Vector
for {
var smpls model.Vector
err := dec.Decode(&smpls)
if err == io.EOF {
break
}
if scenario.fail {
if err == nil {
t.Fatal("Expected error but got none")
}
break
}
if err != nil {
t.Fatal(err)
}
all = append(all, smpls...)
}
sort.Sort(all)
sort.Sort(scenario.expected)
if !reflect.DeepEqual(all, scenario.expected) {
t.Fatalf("%d. output does not match, want: %#v, got %#v", i, scenario.expected, all)
}
}
}
func testDiscriminatorHTTPHeader(t testing.TB) {
var scenarios = []struct {
input map[string]string
output Format
err error
}{
{
input: map[string]string{"Content-Type": `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="delimited"`},
output: FmtProtoDelim,
},
{
input: map[string]string{"Content-Type": `application/vnd.google.protobuf; proto="illegal"; encoding="delimited"`},
output: FmtUnknown,
},
{
input: map[string]string{"Content-Type": `application/vnd.google.protobuf; proto="io.prometheus.client.MetricFamily"; encoding="illegal"`},
output: FmtUnknown,
},
{
input: map[string]string{"Content-Type": `text/plain; version=0.0.4`},
output: FmtText,
},
{
input: map[string]string{"Content-Type": `text/plain`},
output: FmtText,
},
{
input: map[string]string{"Content-Type": `text/plain; version=0.0.3`},
output: FmtUnknown,
},
}
for i, scenario := range scenarios {
var header http.Header
if len(scenario.input) > 0 {
header = http.Header{}
}
for key, value := range scenario.input {
header.Add(key, value)
}
actual := ResponseFormat(header)
if scenario.output != actual {
t.Errorf("%d. expected %s, got %s", i, scenario.output, actual)
}
}
}
func TestDiscriminatorHTTPHeader(t *testing.T) {
testDiscriminatorHTTPHeader(t)
}
func BenchmarkDiscriminatorHTTPHeader(b *testing.B) {
for i := 0; i < b.N; i++ {
testDiscriminatorHTTPHeader(b)
}
}
func TestExtractSamples(t *testing.T) {
var (
goodMetricFamily1 = &dto.MetricFamily{
Name: proto.String("foo"),
Help: proto.String("Help for foo."),
Type: dto.MetricType_COUNTER.Enum(),
Metric: []*dto.Metric{
&dto.Metric{
Counter: &dto.Counter{
Value: proto.Float64(4711),
},
},
},
}
goodMetricFamily2 = &dto.MetricFamily{
Name: proto.String("bar"),
Help: proto.String("Help for bar."),
Type: dto.MetricType_GAUGE.Enum(),
Metric: []*dto.Metric{
&dto.Metric{
Gauge: &dto.Gauge{
Value: proto.Float64(3.14),
},
},
},
}
badMetricFamily = &dto.MetricFamily{
Name: proto.String("bad"),
Help: proto.String("Help for bad."),
Type: dto.MetricType(42).Enum(),
Metric: []*dto.Metric{
&dto.Metric{
Gauge: &dto.Gauge{
Value: proto.Float64(2.7),
},
},
},
}
opts = &DecodeOptions{
Timestamp: 42,
}
)
got, err := ExtractSamples(opts, goodMetricFamily1, goodMetricFamily2)
if err != nil {
t.Error("Unexpected error from ExtractSamples:", err)
}
want := model.Vector{
&model.Sample{Metric: model.Metric{model.MetricNameLabel: "foo"}, Value: 4711, Timestamp: 42},
&model.Sample{Metric: model.Metric{model.MetricNameLabel: "bar"}, Value: 3.14, Timestamp: 42},
}
if !reflect.DeepEqual(got, want) {
t.Errorf("unexpected samples extracted, got: %v, want: %v", got, want)
}
got, err = ExtractSamples(opts, goodMetricFamily1, badMetricFamily, goodMetricFamily2)
if err == nil {
t.Error("Expected error from ExtractSamples")
}
if !reflect.DeepEqual(got, want) {
t.Errorf("unexpected samples extracted, got: %v, want: %v", got, want)
}
}
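
For orientation, here is a compact sketch of the exported decoding path that the test above drives through the unexported textDecoder: an external caller would normally construct the decoder via the package's NewDecoder helper (assumed here to have the signature NewDecoder(io.Reader, Format)). Illustrative only, not part of the original file.

// Illustrative sketch: decode a tiny text-format exposition into model
// samples with SampleDecoder, using the exported constructor.
package expfmt

import (
	"io"
	"strings"
	"testing"

	"github.com/prometheus/common/model"
)

func TestTextDecodeViaNewDecoderSketch(t *testing.T) {
	in := "my_counter 42\n"
	dec := &SampleDecoder{
		Dec:  NewDecoder(strings.NewReader(in), FmtText),
		Opts: &DecodeOptions{Timestamp: model.Now()},
	}
	var all model.Vector
	for {
		var smpls model.Vector
		err := dec.Decode(&smpls)
		if err == io.EOF {
			break
		}
		if err != nil {
			t.Fatal(err)
		}
		all = append(all, smpls...)
	}
	if len(all) != 1 || all[0].Value != 42 {
		t.Fatalf("unexpected samples decoded: %v", all)
	}
}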

View File

@@ -1,2 +0,0 @@

View File

@@ -1,6 +0,0 @@
minimal_metric 1.234
another_metric -3e3 103948
# Even that:
no_labels{} 3
# HELP line for non-existing metric will be ignored.

View File

@@ -1,12 +0,0 @@
# A normal comment.
#
# TYPE name counter
name{labelname="val1",basename="basevalue"} NaN
name {labelname="val2",basename="base\"v\\al\nue"} 0.23 1234567890
# HELP name two-line\n doc str\\ing
# HELP name2 doc str"ing 2
# TYPE name2 gauge
name2{labelname="val2" ,basename = "basevalue2" } +Inf 54321
name2{ labelname = "val1" , }-Inf

View File

@@ -1,22 +0,0 @@
# TYPE my_summary summary
my_summary{n1="val1",quantile="0.5"} 110
decoy -1 -2
my_summary{n1="val1",quantile="0.9"} 140 1
my_summary_count{n1="val1"} 42
# Latest timestamp wins in case of a summary.
my_summary_sum{n1="val1"} 4711 2
fake_sum{n1="val1"} 2001
# TYPE another_summary summary
another_summary_count{n2="val2",n1="val1"} 20
my_summary_count{n2="val2",n1="val1"} 5 5
another_summary{n1="val1",n2="val2",quantile=".3"} -1.2
my_summary_sum{n1="val2"} 08 15
my_summary{n1="val3", quantile="0.2"} 4711
my_summary{n1="val1",n2="val2",quantile="-12.34",} NaN
# some
# funny comments
# HELP
# HELP
# HELP my_summary
# HELP my_summary

Some files were not shown because too many files have changed in this diff.