
Commit 62d8793

Fixes typos and british spellings (#1139)
1 parent 1be21db commit 62d8793

34 files changed (+36 / -36 lines changed)

deploy-manage/api-keys/elasticsearch-api-keys.md

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ Refer to the [Create API key](https://www.elastic.co/docs/api/doc/elasticsearch/
 Refer to the [Create cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) documentation to learn more about creating cross-cluster API keys.

-## Update an API key [udpate-api-key]
+## Update an API key [update-api-key]

 To update an API key, go to the **API Keys** management page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md), and then click on the name of the key. You cannot update the name or the type of API key.

deploy-manage/autoscaling/trained-model-autoscaling.md

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@ $$$nlp-model-adaptive-resources$$$

 Model allocations are independent units of work for NLP tasks. If you set the numbers of threads and allocations for a model manually, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the [pricing calculator](https://cloud.elastic.co/pricing) to learn more about the possible costs.)

-When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, a new model allocation is automatically created. When the load is low, a model allocation is automatically removed. You can explicitely set the minimum and maximum number of allocations; autoscaling will occur within these limits.
+When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, a new model allocation is automatically created. When the load is low, a model allocation is automatically removed. You can explicitly set the minimum and maximum number of allocations; autoscaling will occur within these limits.

 ::::{note}
 If you set the minimum number of allocations to 1, you will be charged even if the system is not using those resources.

@@ -68,7 +68,7 @@ You can enable adaptive resources for your models when starting or updating the

 You can choose from three levels of resource usage for your trained model deployment; autoscaling will occur within the selected level’s range.

-Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section to find out the setings for the level you selected.
+Refer to the tables in the [Model deployment resource matrix](#model-deployment-resource-matrix) section to find out the settings for the level you selected.

 :::{image} /deploy-manage/images/machine-learning-ml-nlp-deployment-id-elser-v2.png
 :alt: ELSER deployment with adaptive resources enabled.

deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ spec:
 topologyKey: kubernetes.io/hostname
 ```

-This is ECK default behaviour if you don’t specify any `affinity` option. To explicitly disable the default behaviour, set an empty affinity object:
+This is ECK default behavior if you don’t specify any `affinity` option. To explicitly disable the default behavior, set an empty affinity object:

 ```yaml
 apiVersion: elasticsearch.k8s.elastic.co/v1
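
The hunk above is truncated at the start of that example. As a rough sketch of what an empty affinity object looks like in an {{es}} manifest (the resource name, version, and node count below are placeholders, not values taken from this commit):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart          # placeholder name
spec:
  version: 8.17.0           # placeholder version
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        affinity: {}        # empty object disables ECK's default anti-affinity behavior
```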

deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ podTemplate:
 linkerd.io/inject: enabled
 ```

-If automatic sidecar injection is enabled and [auto mounting of service account tokens](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) is not disabled on your Kubernetes cluster, examples defined elsewhere in the ECK documentation will continue to work under Linkerd without requiring any modifications. However, as the default behaviour of ECK is to enable TLS for {{es}}, {{kib}} and APM Server resources, you will not be able to view detailed traffic information from Linkerd dashboards and command-line utilities. The following sections illustrate the optional configuration necessary to enhance the integration of {{stack}} applications with Linkerd.
+If automatic sidecar injection is enabled and [auto mounting of service account tokens](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) is not disabled on your Kubernetes cluster, examples defined elsewhere in the ECK documentation will continue to work under Linkerd without requiring any modifications. However, as the default behavior of ECK is to enable TLS for {{es}}, {{kib}} and APM Server resources, you will not be able to view detailed traffic information from Linkerd dashboards and command-line utilities. The following sections illustrate the optional configuration necessary to enhance the integration of {{stack}} applications with Linkerd.

 ### {{es}} [k8s-service-mesh-linkerd-elasticsearch]

deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ A [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/confi

 ECK manages a default PDB per {{es}} resource. It allows one {{es}} Pod to be taken down, as long as the cluster has a `green` health. Single-node clusters are not considered highly available and can always be disrupted.

-In the {{es}} specification, you can change the default behaviour as follows:
+In the {{es}} specification, you can change the default behavior as follows:

 ```yaml
 apiVersion: elasticsearch.k8s.elastic.co/v1
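
The hunk stops at the opening of that example. A minimal sketch of what overriding the default PDB can look like, assuming a three-node cluster named `quickstart` (the name, version, and `minAvailable` value are illustrative, not taken from this commit):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.17.0
  nodeSets:
  - name: default
    count: 3
  podDisruptionBudget:
    spec:
      minAvailable: 2       # allow at most one of the three Pods to be unavailable
      selector:
        matchLabels:
          elasticsearch.k8s.elastic.co/cluster-name: quickstart
```

An empty `podDisruptionBudget: {}` should remove the default PDB entirely.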

deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md

Lines changed: 1 addition & 1 deletion
@@ -39,5 +39,5 @@ The pre-stop lifecycle hook also tries to gracefully shut down the {{es}} node i

 This is done on a best effort basis. In particular requests to an {{es}} cluster already in the process of shutting down might fail if the Kubernetes service has already been removed. The script allows for `PRE_STOP_MAX_DNS_ERRORS` which default to 2 before giving up.

-When using local persistent volumes a different behaviour might be desirable because the {{es}} node’s associated storage will not be available anymore on the new Kubernetes node. `PRE_STOP_SHUTDOWN_TYPE` allows to override the default shutdown type to one of the [possible values](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-shutdown-put-node). Be aware that setting it to anything other than `restart` might mean that the pre-stop hook will run longer than `terminationGracePeriodSeconds` of the Pod while moving data out of the terminating Pod and will not be able to complete unless you also adjust that value in the `podTemplate`.
+When using local persistent volumes a different behavior might be desirable because the {{es}} node’s associated storage will not be available anymore on the new Kubernetes node. `PRE_STOP_SHUTDOWN_TYPE` allows to override the default shutdown type to one of the [possible values](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-shutdown-put-node). Be aware that setting it to anything other than `restart` might mean that the pre-stop hook will run longer than `terminationGracePeriodSeconds` of the Pod while moving data out of the terminating Pod and will not be able to complete unless you also adjust that value in the `podTemplate`.
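
For context, a hedged sketch of overriding that variable through the `elasticsearch` container environment in the `podTemplate` might look like this (cluster name, grace period, and the `remove` value are illustrative, not from this commit):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.17.0
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 600   # leave time to move data off the terminating Pod
        containers:
        - name: elasticsearch
          env:
          - name: PRE_STOP_SHUTDOWN_TYPE
            value: remove                    # instead of the default restart shutdown type
```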

deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ When `kibanaRef` is specified, Beat tries to connect to the {{kib}} instance. If

 When `kubectl` is used to modify a resource, it calculates the diff between the user applied and the existing configuration. This diff has special [semantics](https://tools.ietf.org/html/rfc7396#section-1) that forces the removal of keys if they have special values. For example, if the user-applied configuration contains `some_key: null` (or equivalent `some_key: ~`), this is interpreted as an instruction to remove `some_key`. In Beats configurations, this is often a problem when it comes to defining things like [processors](beats://reference/filebeat/add-cloud-metadata.md). To avoid this problem:

-* Use `some_key: {}` (empty map) or `some_key: []` (empty array) instead of `some_key: null` if doing so does not affect the behaviour. This might not be possible in all cases as some applications distinguish between null values and empty values and behave differently.
+* Use `some_key: {}` (empty map) or `some_key: []` (empty array) instead of `some_key: null` if doing so does not affect the behavior. This might not be possible in all cases as some applications distinguish between null values and empty values and behave differently.
 * Instead of using `config` to define configuration inline, use `configRef` and store the configuration in a Secret.
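
Because the advice in that bullet is easiest to see in context, here is a hedged, abbreviated sketch of a Beat spec that uses an empty map for a processor instead of a null value (the resource name and Filebeat details are illustrative, and the spec omits inputs and output references):

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 8.17.0
  config:
    processors:
    # `add_cloud_metadata: null` (or `~`) would be read by kubectl's merge-patch
    # semantics as an instruction to delete the key; an empty map keeps it.
    - add_cloud_metadata: {}
```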

deploy-manage/distributed-architecture.md

Lines changed: 1 addition & 1 deletion
@@ -21,6 +21,6 @@ The topics in this section provides information about the architecture of {{es}}
 * [Reading and writing documents](distributed-architecture/reading-and-writing-documents.md): Learn how {{es}} replicates read and write operations across shards and shard copies.
 * [Shard allocation, relocation, and recovery](distributed-architecture/shard-allocation-relocation-recovery.md): Learn how {{es}} allocates and balances shards across nodes.
 * [Shard allocation awareness](distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md): Learn how to use custom node attributes to distribute shards across different racks or availability zones.
-* [Disocvery and cluster formation](distributed-architecture/discovery-cluster-formation.md): Learn about the cluster formation process including voting, adding nodes and publishing the cluster state.
+* [Discovery and cluster formation](distributed-architecture/discovery-cluster-formation.md): Learn about the cluster formation process including voting, adding nodes and publishing the cluster state.
 * [Shard request cache](/deploy-manage/distributed-architecture/shard-request-cache.md): Learn how {{es}} caches search requests to improve performance.
 * [{{kib}} task management](distributed-architecture/kibana-tasks-management.md): Learn how {{kib}} runs background tasks and distribute work across multiple {{kib}} instances to be persistent and scale with your deployment.

deploy-manage/distributed-architecture/reading-and-writing-documents.md

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ In the case that the primary itself fails, the node hosting the primary will sen
 Once the operation has been successfully performed on the primary, the primary has to deal with potential failures when executing it on the replica shards. This may be caused by an actual failure on the replica or due to a network issue preventing the operation from reaching the replica (or preventing the replica from responding). All of these share the same end result: a replica which is part of the in-sync replica set misses an operation that is about to be acknowledged. In order to avoid violating the invariant, the primary sends a message to the master requesting that the problematic shard be removed from the in-sync replica set. Only once removal of the shard has been acknowledged by the master does the primary acknowledge the operation. Note that the master will also instruct another node to start building a new shard copy in order to restore the system to a healthy state.

 $$$demoted-primary$$$
-While forwarding an operation to the replicas, the primary will use the replicas to validate that it is still the active primary. If the primary has been isolated due to a network partition (or a long GC) it may continue to process incoming indexing operations before realising that it has been demoted. Operations that come from a stale primary will be rejected by the replicas. When the primary receives a response from the replica rejecting its request because it is no longer the primary then it will reach out to the master and will learn that it has been replaced. The operation is then routed to the new primary.
+While forwarding an operation to the replicas, the primary will use the replicas to validate that it is still the active primary. If the primary has been isolated due to a network partition (or a long GC) it may continue to process incoming indexing operations before realizing that it has been demoted. Operations that come from a stale primary will be rejected by the replicas. When the primary receives a response from the replica rejecting its request because it is no longer the primary then it will reach out to the master and will learn that it has been replaced. The operation is then routed to the new primary.

 ::::{admonition} What happens if there are no replicas?
 This is a valid scenario that can happen due to index configuration or simply because all the replicas have failed. In that case the primary is processing operations without any external validation, which may seem problematic. On the other hand, the primary cannot fail other shards on its own but request the master to do so on its behalf. This means that the master knows that the primary is the only single good copy. We are therefore guaranteed that the master will not promote any other (out-of-date) shard copy to be a new primary and that any operation indexed into the primary will not be lost. Of course, since at that point we are running with only single copy of the data, physical hardware issues can cause data loss. See [Active shards](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-create) for some mitigation options.

deploy-manage/distributed-architecture/shard-request-cache.md

Lines changed: 1 addition & 1 deletion
@@ -85,7 +85,7 @@ Requests where `size` is greater than 0 will not be cached even if the request c

 ## Cache key [_cache_key]

-A hash of the whole JSON body is used as the cache key. This means that if the JSON changes — for instance if keys are output in a different order — then the cache key will not be recognised.
+A hash of the whole JSON body is used as the cache key. This means that if the JSON changes — for instance if keys are output in a different order — then the cache key will not be recognized.

 ::::{tip}
 Most JSON libraries support a *canonical* mode which ensures that JSON keys are always emitted in the same order. This canonical mode can be used in the application to ensure that a request is always serialized in the same way.
