While configuring your Python client, you can use various system properties provided by Hazelcast to tune the client. These properties can be set programmatically through the `config.set_property` method or by using an environment variable.

The value of any property will be:
* the programmatically configured value, if programmatically set,
* the environment variable value, if the environment variable is set,
* the default value, if none of the above is set.

See the following for an example client system property configuration:
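
A minimal sketch of both approaches; the `ClientProperties.HEARTBEAT_TIMEOUT` constant and its `name` attribute are assumptions based on the client property list linked below, and the 30-second value is arbitrary:

```python
import os

import hazelcast
from hazelcast.config import ClientProperties

config = hazelcast.ClientConfig()

# Programmatically; the explicitly set value takes precedence.
config.set_property(ClientProperties.HEARTBEAT_TIMEOUT.name, 30000)

# Or via an environment variable; used only when the property
# is not set programmatically.
os.environ[ClientProperties.HEARTBEAT_TIMEOUT.name] = "30000"
```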
If you set a property both programmatically and via an environment variable, the programmatically set value will be used.

See the [complete list](http://hazelcast.github.io/hazelcast-python-client/3.10/hazelcast.config.html#hazelcast.config.ClientProperties) of client system properties, along with their descriptions, which can be used to configure your Hazelcast Python client.
## 1.5. Basic Usage
Now that we have a working cluster and we know how to configure both our cluster and client, we can run a simple program to use a distributed map in the Python client.

The client executes each operation through the already established connection to the cluster.

While sending the requests to the related members, the operations can fail for various reasons. Read-only operations are retried by default. If you want to enable retrying for the other operations, you can set `redo_operation` to `True`. See the [Enabling Redo Operation section](#53-enabling-redo-operation).
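
A rough sketch of enabling it; the exact attribute path, `config.network_config.redo_operation`, is an assumption here and the linked section remains authoritative:

```python
import hazelcast

config = hazelcast.ClientConfig()

# Assumed attribute path: retry non read-only operations as well
# when an invocation fails.
config.network_config.redo_operation = True

client = hazelcast.HazelcastClient(config)
```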
You can set a timeout for retrying the operations sent to a member. This can be provided by using the property `hazelcast.client.invocation.timeout.seconds` via the `ClientConfig.set_property` method. The client will retry an operation within this given period, provided that it is a read-only operation or you have enabled `redo_operation` as stated in the above paragraph. This timeout value is important when a failure results from either of the following causes:
* Member throws an exception.
* Connection between the client and the member is closed.
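
As a minimal sketch of setting this timeout (the property name comes from the paragraph above; the 10-second value is only illustrative):

```python
import hazelcast

config = hazelcast.ClientConfig()

# Stop retrying an invocation after 10 seconds.
config.set_property("hazelcast.client.invocation.timeout.seconds", 10)
```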
When a connection problem occurs, an operation is retried if it is certain that it has not run on the member yet or if it is idempotent such as a read-only operation, i.e., retrying does not have a side effect. If it is not certain whether the operation has run on the member, then the non-idempotent operations are not retried. However, as explained in the first paragraph of this section, you can force all the client operations to be retried (`redo_operation`) when there is a connection failure between the client and member. But in this case, you should know that some operations may run multiple times causing conflicts. For example, assume that your client sent a `queue.offer` operation to the member and then the connection is lost. Since there will be no response for this operation, you will not know whether it has run on the member or not. If you enabled `redo_operation`, it means this operation may run again, which may cause two instances of the same object in the queue.
When an invocation is being retried, the client may wait some time before it retries again. This duration can be configured with the corresponding client system property from the list linked above.

Most of the distributed data structures are supported by the Python client. In this chapter, you will learn how to use these distributed data structures.

In this example, the code creates a list with the values greater than or equal to "27".
## 7.8. Performance
### 7.8.1. Near Cache
Map entries in Hazelcast are partitioned across the cluster members. Hazelcast clients do not have local data at all. Suppose you read the key `k` a number of times from a Hazelcast client and `k` is owned by a member in your cluster. Then each `map.get(k)` will be a remote operation, which creates a lot of network trips. If you have a map that is mostly read, then you should consider creating a local Near Cache, so that reads are sped up and less network traffic is created.
These benefits do not come for free; please consider the following trade-offs:
- Clients with a Near Cache will have to hold the extra cached data, which increases memory consumption.
- If invalidation is enabled and entries are updated frequently, then invalidations will be costly.
- Near Cache breaks the strong consistency guarantees; you might be reading stale data.

Near Cache is highly recommended for maps that are mostly read.
#### 7.8.1.1. Configuring Near Cache
The following snippet shows how a Near Cache is configured in the Python client, presenting all available values for each element:
```python
from hazelcast.config import NearCacheConfig, IN_MEMORY_FORMAT, EVICTION_POLICY

# Illustrative values; the element names correspond to the descriptions below.
near_cache_config = NearCacheConfig("map-name")
near_cache_config.in_memory_format = IN_MEMORY_FORMAT.OBJECT
near_cache_config.invalidate_on_change = True
near_cache_config.time_to_live_seconds = 60
near_cache_config.max_idle_seconds = 30
near_cache_config.eviction_policy = EVICTION_POLICY.LRU
near_cache_config.eviction_max_size = 100
near_cache_config.eviction_sampling_count = 8
near_cache_config.eviction_sampling_pool_size = 16
```
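
For the Near Cache to take effect, the configuration also needs to be registered on the client configuration before the client is started; the `add_near_cache_config` helper used below is an assumption, so treat this as a sketch:

```python
import hazelcast

config = hazelcast.ClientConfig()

# Assumed registration helper; associates the Near Cache config
# with the map name it was created for.
config.add_near_cache_config(near_cache_config)

client = hazelcast.HazelcastClient(config)
```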
Following are the descriptions of all configuration elements:
- `in_memory_format`: Specifies in which format data will be stored in your Near Cache. Note that a map’s in-memory format can be different from that of its Near Cache. Available values are as follows:
  - `BINARY`: Data will be stored in serialized binary format (default value).
  - `OBJECT`: Data will be stored in deserialized format.
- `invalidate_on_change`: Specifies whether the cached entries are evicted when the entries are updated or removed. Its default value is `True`.
- `time_to_live_seconds`: Maximum number of seconds for each entry to stay in the Near Cache. Entries that are older than this period are automatically evicted from the Near Cache. Regardless of the eviction policy used, `time_to_live_seconds` still applies. Any non-negative number can be assigned. Its default value is `None`, which means infinite.
- `max_idle_seconds`: Maximum number of seconds each entry can stay in the Near Cache untouched (not read). Entries that are not read for more than this period are removed from the Near Cache. Any non-negative number can be assigned. Its default value is `None`, which means infinite.
- `eviction_policy`: Eviction policy configuration. Available values are as follows:
  - `LRU`: Least Recently Used (default value).
  - `LFU`: Least Frequently Used.
  - `NONE`: No items are evicted and the `eviction_max_size` property is ignored. You can still combine it with `time_to_live_seconds` and `max_idle_seconds` to evict items from the Near Cache.
  - `RANDOM`: A random item is evicted.
- `eviction_max_size`: Maximum number of entries kept in the memory before eviction kicks in.
- `eviction_sampling_count`: Number of random entries that are evaluated to see if some of them are already expired. If there are expired entries, those are removed and there is no need for eviction.
- `eviction_sampling_pool_size`: Size of the pool for eviction candidates. The pool is kept sorted according to the eviction policy. The entry with the highest score is evicted.
#### 7.8.1.2. Near Cache Example for Map
The following is an example configuration for a Near Cache defined in the `mostly-read-map` map. According to this configuration, the entries are stored as `OBJECT`s in this Near Cache and eviction starts when the count of entries reaches `5000`; entries are evicted based on the `LRU` (Least Recently Used) policy. In addition, when an entry is updated or removed on the member side, it is eventually evicted on the client side.
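
A sketch matching that description, reusing the configuration elements introduced above (the `add_near_cache_config` registration helper is again an assumption):

```python
from hazelcast.config import NearCacheConfig, IN_MEMORY_FORMAT, EVICTION_POLICY

near_cache_config = NearCacheConfig("mostly-read-map")
near_cache_config.in_memory_format = IN_MEMORY_FORMAT.OBJECT
near_cache_config.invalidate_on_change = True
near_cache_config.eviction_policy = EVICTION_POLICY.LRU
near_cache_config.eviction_max_size = 5000

# Assumed registration helper on the client configuration.
config.add_near_cache_config(near_cache_config)
```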
#### 7.8.1.3. Near Cache Eviction

In the scope of Near Cache, eviction means evicting (clearing) the entries selected according to the given `eviction_policy` when the specified `eviction_max_size` has been reached.

The `eviction_max_size` defines the entry count at which the Near Cache is considered full and eviction should be triggered.

Once the eviction is triggered, the configured `eviction_policy` determines which, if any, entries must be evicted.
#### 7.8.1.4. Near Cache Expiration
Expiration means the eviction of expired records. A record is expired:
- If it is not touched (accessed/read) for `max_idle_seconds`.
- If `time_to_live_seconds` has passed since it was put into the Near Cache.

The actual expiration is performed when a record is accessed: it is checked whether the record has expired. If it has expired, it is evicted and a `KeyError` is raised to the caller.
#### 7.8.1.5. Near Cache Invalidation
Invalidation is the process of removing an entry from the Near Cache when its value is updated or it is removed from the original map (to prevent stale reads). See the [Near Cache Invalidation section](https://docs.hazelcast.org/docs/latest/manual/html-single/#near-cache-invalidation) in the Hazelcast IMDG Reference Manual.
## 7.9. Logging
In this chapter, you will learn about the different ways of configuring the logging for the Python client.
### 7.9.1. Logging Configuration
Hazelcast Python client allows you to configure the logging through the root logger via the `logging` module.
Although you cannot change the logging levels used within the Hazelcast Python client, you can use the `level` argument to specify a threshold so that only the logs that are at least as severe as the specified level are emitted.

Below is an example of this configuration.

```python
import logging

logging.basicConfig(level=logging.INFO)
```
#### Setting Custom Handlers
Apart from `FileHandler` and `StreamHandler`, custom handlers can be added to the root logger using the `handlers` attribute.
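
As a minimal sketch, the custom handler below is purely illustrative and uses the standard `logging.Handler`/`addHandler` API of the `logging` module rather than any Hazelcast-specific call:

```python
import logging


class ListHandler(logging.Handler):
    """Illustrative handler that collects formatted log records in a list."""

    def __init__(self):
        super(ListHandler, self).__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))


handler = ListHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

# Attach the custom handler to the root logger, which the client logs through.
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)
```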