Description: The "shards" parameter does not have a corresponding whitelist mechanism, so it can request any URL. Mitigation: Upgrade to Apache Solr 7. Furthermore, this release includes Apache Lucene 7. Description: The details of this vulnerability were reported to the Apache Security mailing list. See [1] for more details. Mitigation: Users are advised to upgrade to either Solr 6.

Once the upgrade is complete, no other steps are required: those releases disable external entities in anonymous XML files passed through this request parameter. If users are unable to upgrade to a fixed Solr 6.x release, and their Solr instances are only used locally without access to the public internet, the vulnerability cannot be exploited directly, so updating may not be required; instead, reverse proxies or Solr client applications should be guarded so that end users cannot inject dataConfig request parameters.

Please refer to [2] and the Apache Solr Reference Guide for 7.x on how to correctly secure Solr servers.

Description: Apache Solr uses Apache Tika for parsing binary file types such as doc, xls, and pdf. A malicious user could inject arbitrary code into a MATLAB file that would be executed when the object is deserialized. Mitigation: Users are advised to upgrade to either a fixed Solr 5.x release or a later version.

In that Solr 5.x release, RunExecutableListener has been disabled by default and can be re-enabled via a solr system property. Furthermore, the release includes the corresponding Apache Lucene 5.x, plus a fix for a bug where Solr attempted to load the same core twice (error message: "Lock held by this virtual machine").

Description: The details of this vulnerability were reported on public mailing lists. It can also be used as a blind XXE, using the ftp wrapper to read arbitrary local files from the Solr server. The second vulnerability relates to remote code execution using the RunExecutableListener, available in all affected versions of Solr. At the time of the above report, this was a 0-day vulnerability with a working exploit affecting the versions of Solr mentioned in the previous section.

However, mitigation steps to protect Solr users were announced the same day: disabling the Config API's edit capability disallows any changes to configurations via that API. This is a key factor in this vulnerability, since the Config API allows GET requests to add the RunExecutableListener to your config, for example by adding a listener definition to solrconfig.xml. Critical Security Update: fix for the CVE covering a working 0-day exploit reported on the public mailing list.

Auto-scaling: Solr can now move replicas automatically when a new node is added or an existing node is removed, using the auto-scaling policy framework introduced earlier in the 7.x line. Auto-scaling: the autoAddReplicas feature, previously limited to shared file systems, is now available for all file systems; it has been ported to use the new auto-scaling framework internally.

Auto-scaling: New set-trigger, remove-trigger, set-listener, remove-listener, suspend-trigger, resume-trigger APIs. Furthermore, this is the first time Solr has out of the box support for polygons.

Expanded support for statistical stream evaluators such as various distributions, rank correlations, distances and more. Please secure your Solr servers since a zero-day exploit has been reported on a public mailing list.

This has been assigned a public CVE number, which we will reference in future communication about resolution and mitigation steps. Until fixes are available, all Solr users are advised to restart their Solr instances with the system property -Ddisable.configEdit=true.

This will disallow any changes to be made to configurations via the Config API. This is a key factor in this vulnerability, since it allows GET requests to add the RunExecutableListener to the config. It is sufficient to protect you from this type of attack, but it means you cannot use the edit capabilities of the Config API until the other fixes described below are in place.
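
A minimal sketch of applying that property, assuming the standard bin/solr launcher (which forwards -D options to the JVM); adjust for your own service scripts:

    bin/solr restart -Ddisable.configEdit=true

Each node needs the property set until a patched release is installed.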

We will also determine mitigation steps for users on earlier versions, which may include a 6.x bugfix release. The RunExecutableListener will be removed in a later 7.x release.

It was previously used by Solr for index replication but has been replaced and is no longer needed. In the 7.x line you may see the message "Lock held by this virtual machine" during startup; it means Solr is trying to start some cores twice. This PDF is the definitive guide to Solr. This version adds documentation for new features of Solr, plus detailed information about changes and deprecations you should know about when upgrading from Solr 6.x.

Replica types: Solr 7 supports different replica types, which handle updates differently. Solr can now allocate new replicas to nodes using a new auto-scaling policy framework. This framework will, in future releases, enable Solr to move shards around based on load, disk usage, and so on.

Streaming Expressions adds a new statistical programming syntax for the statistical analysis of SQL queries, random samples, time series, and graph result sets. A new version of the Analytics Component is included. CVE: security vulnerability in the Kerberos delegation token functionality. Solr's Kerberos plugin can be configured to use delegation tokens, which allows an application to reuse the authentication of an end user or another application. Firstly, access to the security configuration can be leaked to users other than the Solr super user.

In Solr 6.x, SolrJmxReporter was broken on core reload, which resulted in some or most metrics not being reported via JMX after core reloads, depending on timing. Furthermore, this release includes the corresponding Apache Lucene 6.x. Description: Solr uses a PKI-based mechanism to secure inter-node communication when security is enabled. It is possible to create a specially crafted node name that does not exist as part of the cluster and point it at a malicious node.

This can trick the nodes in the cluster into believing that the malicious node is a member of the cluster. Users who only use SSL without basic authentication, or who use Kerberos, are not affected. New: CartesianProductStream, which turns a single tuple with a multi-valued field into N tuples, one for each value in the multi-valued field. Fixed and enhanced the generated query so that it does not pollute the queryCache.

In-place updates to numeric docValues fields (single-valued, non-stored, non-indexed) are supported using the atomic update syntax.
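
A sketch of that syntax (the collection name mycoll and field popularity are hypothetical; the field must be a single-valued, non-stored, non-indexed numeric docValues field for the update to happen in place):

    curl -X POST 'http://localhost:8983/solr/mycoll/update?commit=true' \
      -H 'Content-Type: application/json' \
      -d '[{"id": "doc1", "popularity": {"inc": 1}}]'

The "inc" (or "set") operation then updates the docValues entry without reindexing the whole document.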

A new significantTerms Streaming Expression that is able to extract the significant terms in an index.

The Metrics API now supports non-numeric metrics (version, disk type, component state, system properties). The DirectUpdateHandler2 now implements MetricsProducer and exposes stats via the Metrics API and configured reporters. MMapDirectoryFactory now supports a "preload" option to ask for mapped pages to be loaded into physical memory on init.

Fixed: serious performance degradation in Solr 6.x; IndexWriter metrics collection is turned off by default, and directory-level metrics collection has been removed completely until a better design is found. In a separate issue, Solr did not validate a file name supplied in a request, hence it was possible to craft a special request involving path traversal, leaving any file readable by the Solr server process exposed.

Added "param" query type to facet domain filter specification to obtain filters via query parameters. Any facet command can be filtered using a new parameter filter.

A new highlighter: the Unified Highlighter. Try it via the hl.method parameter. It is the highest-performing highlighter, especially for large documents. Please use this new highlighter and report issues, since it will likely become the default one day.

Leading wildcards in the complexphrase query parser are now accepted, and optimized with the ReversedWildcardFilterFactory when it is provided. A new document processor, SkipExistingDocumentsProcessor, skips duplicate inserts and ignores updates to missing docs. FieldCache information fetched via the mbeans handler, or seen via the UI, now displays the total size used.

Q: Can you come back to us with a solution, or provide an installation guide clarifying how to enable the proxy? A: NTLM support is distributed as a plugin due to its licensing. Q: I have downloaded the newer version; the previous version included Edgar Renderer 3.x. Also, it seems the support email address may be garbled; can you confirm it is correct? A: The e-mail address is correct. Q: Can you give me some advice for this installation? A: Otherwise, please email the Arelle support address. Q: I have some problems with ArelleCmdLine on a Windows machine.

Q (continued): Everything worked OK until I executed the command. A: The plugins configured via the GUI once also applied the same configuration to command-line operation. However, that caused production problems, and as a result command-line operation no longer reads the GUI configuration; plugins must be specified on the command line, as sketched below.
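
A sketch of what that can look like (the --plugins option and the EdgarRenderer plugin name are assumptions here; check arelleCmdLine --help for your installation):

    arelleCmdLine --plugins EdgarRenderer --file filing.xml --validate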

Q: The latest Linux and Red Hat versions of Arelle do not open in Ubuntu; may I please ask if you can correct this issue? License: Arelle is licensed under the Apache License, Version 2.0.

Change log: the Arelle project history is on GitHub. Downloads: US, EU, and CN sites; source builds require Python 3.x. Contact and support: the Arelle-Users group, a comment below, or the blog.


The database HA features can be leveraged to move the Flume agent to another host.

These conventions for alias names are used in the component-specific examples, to keep the names short and consistent across all examples. The following material covers Flume 1.x.

Apache Flume is a top-level project at the Apache Software Foundation. A quick test of the classic netcat example (sketched below) looks like this: connect to the agent's port, see "Connected to localhost", type "Hello world!", and the agent's log shows a NetcatSource line reporting the created server socket. Global SSL parameters can be set via system properties or environment variables (for example, javax.net.ssl.keyStore); protocol and cipher-suite lists are comma-separated, and excluded protocols or cipher suites are removed from the resulting list if provided.
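
For reference, this is essentially the guide's single-node starter configuration: a netcat source feeding a logger sink through a memory channel (the names a1, r1, k1, c1 are arbitrary):

    # example.conf: a single-node Flume configuration
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1

    # Describe/configure the source
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    # Describe the sink
    a1.sinks.k1.type = logger

    # Use a channel which buffers events in memory
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1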

Avro source properties: channels (required); type (needs to be avro); bind (hostname or IP address to listen on); port (port to bind to); threads (maximum number of worker threads to spawn); plus optional selector and interceptor settings.

The compression-type must match the compression-type of the matching AvroSource. ssl (false): set this to true to enable SSL encryption. If a keystore is not specified here, the global keystore is used if defined; otherwise it is a configuration error. The same holds for the keystore password.

If the keystore type is not specified here, the global keystore type is used if defined; otherwise the default is JKS. SSLv3 will always be excluded in addition to the protocols specified. The enabled protocols are the included protocols minus the excluded protocols.

If included-protocols is empty, every supported protocol is included. Likewise, the enabled cipher suites are the included cipher suites minus the excluded ones, and if included-cipher-suites is empty, every supported cipher suite is included. An Avro source example for an agent named a1 follows the same pattern as the netcat example above, with type avro. Thrift source properties: channels (required); type (needs to be thrift); bind (hostname or IP address to listen on); port (port to bind to); threads (maximum number of worker threads to spawn); plus optional selector settings.

In Kerberos mode, agent-principal and agent-keytab are required for successful authentication. The Thrift source in secure mode will accept connections only from Thrift clients that have Kerberos enabled and are successfully authenticated to the Kerberos KDC. Exec source properties: channels (required); type (needs to be exec); command (the command to execute); shell (a shell invocation used to run the command; required only for commands relying on shell features like wildcards, backticks, and pipes). An example exec source is sketched below.
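
A minimal exec source sketch in the same style as the guide's example (the tailed file path is illustrative):

    a1.sources = r1
    a1.channels = c1
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /var/log/secure
    a1.sources.r1.channels = c1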

For the JMS source: a durable subscription can only be used with destinationType topic, and the corresponding subscription properties are required for durable subscriptions. BytesMessage: the bytes of the message are copied to the body of the FlumeEvent; no more than 2GB of data can be converted per message.

TextMessage: the text of the message is converted to a byte array and copied to the body of the FlumeEvent. The default converter uses UTF-8, but this is configurable. Please note: there are no component-level configuration parameters for the JMS source, unlike for other components.

Flume tries to detect these problem conditions and will fail loudly if they are violated: if a file is written to after being placed into the spooling directory, Flume will print an error to its log file and stop processing, and likewise if a file name is reused at a later time. Spooling directory source properties: channels (required); type (needs to be spooldir); an example is sketched below, with further property notes continuing after it.
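
A spooling-directory sketch in the guide's style (the spool path is illustrative):

    a1.channels = ch-1
    a1.sources = src-1
    a1.sources.src-1.type = spooldir
    a1.sources.src-1.channels = ch-1
    a1.sources.src-1.spoolDir = /var/log/apache/flumeSpool
    a1.sources.src-1.fileHeader = true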

includePattern can be used together with ignorePattern, and vice versa; if a file matches both the ignorePattern and includePattern regexes, the file is ignored.

If this path is not an absolute path, it is interpreted as relative to the spoolDir. The new tracker file name is derived from the ingested one plus the fileSuffix. In the case of the oldest and youngest consume orders, the last modified time of the files is used to compare them.

In case of a tie, the file with the smallest lexicographical order is consumed first. In the case of random order, any file is picked randomly. The source starts at a low backoff and increases it exponentially each time the channel throws a ChannelException, up to the value specified by this parameter.

FAIL: throw an exception and fail to parse the file. The default is to parse each line as an event; a custom deserializer class must implement EventDeserializer. Among the deserializer properties, deserializer.maxLineLength bounds the length of a single line.

If a line exceeds this length, it is truncated, and the remaining characters on the line appear in a subsequent event. The Twitter source (org.apache.flume.source.twitter.TwitterSource) is configured with the usual a1 pattern. Kafka source properties: channels (required); type (needs to be org.apache.flume.source.kafka.KafkaSource).

kafka.consumer.group.id: setting the same id in multiple sources or agents indicates that they are part of the same consumer group. The topic regex property has higher priority than kafka.topics. The wait period reduces aggressive pinging of an empty Kafka topic; one second is ideal for ingestion use cases, but a lower value may be required for low-latency operations with interceptors.

Five seconds is ideal for ingestion use cases, but a lower value may be required for low-latency operations with interceptors. Set useFlumeEventFormat to true to read events as the Flume Avro binary format. Used in conjunction with the same property on the KafkaSink, or with the parseAsFlumeEvent property on the Kafka channel, this preserves any Flume headers sent on the producing side. Care should be taken if combining this with the Kafka sink's topicHeader property, so as to avoid sending the message back to the same topic in a loop.

See below for additional info on secure setup. Any consumer property supported by Kafka can be used; the only requirement is to prepend the property name with the kafka.consumer prefix, for example kafka.consumer.auto.offset.reset. An example for an agent named tier1 is sketched below.
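
A Kafka source sketch in the guide's style (broker address, topics, and group id are illustrative):

    tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
    tier1.sources.source1.channels = channel1
    tier1.sources.source1.batchSize = 5000
    tier1.sources.source1.batchDurationMillis = 2000
    tier1.sources.source1.kafka.bootstrap.servers = localhost:9092
    tier1.sources.source1.kafka.topics = test1, test2
    tier1.sources.source1.kafka.consumer.group.id = custom.g.id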

Netcat UDP source properties: channels (required); type (needs to be netcatudp); bind (host name or IP address to bind to); port (port to bind to); remoteAddressHeader; plus optional selector settings.

Sequence generator source properties: channels (required); type (needs to be seq); plus optional selector settings. For the syslog sources, a space-separated list of fields to include is allowed as well; currently, the following fields can be included: priority, version, timestamp, hostname. The remote-address header allows interceptors and channel selectors to customize routing logic based on the IP address of the client; the hostname header does the same based on the host name of the client.

Retrieving the host name may involve a name service reverse lookup, which may affect performance. An example syslog TCP source for an agent named a1 is sketched below.
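
A syslog TCP sketch in the guide's style (port and host are illustrative):

    a1.sources = r1
    a1.channels = c1
    a1.sources.r1.type = syslogtcp
    a1.sources.r1.port = 5140
    a1.sources.r1.host = localhost
    a1.sources.r1.channels = c1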

The multiport variant adds an incoming-port header, which allows interceptors and channel selectors to customize routing logic based on the incoming port. Using the default thread count is usually fine; the setting is provided for performance tuning, and Mina will spawn two request-processing threads per detected CPU, which is often reasonable.

A multiport syslog TCP source for an agent named a1 follows the same pattern, with type multiport_syslogtcp and a list of ports. HTTP source properties: type (needs to be http); port (the port the source should bind to; required).

If SSL is enabled but the keystore or keystore password is not specified here, the corresponding global keystore setting is used if defined; otherwise it is a configuration error.

QueuedThreadPool will only be used if at least one property of that class is set. HttpConfiguration and SslContextFactory settings are also exposed; SslContextFactory is only applicable when ssl is set to true. An example HTTP source is sketched below.
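
An HTTP source sketch in the guide's style; org.example.rest.RestHandler stands in for your own handler class, and handler.nickname shows how arbitrary handler parameters are passed:

    a1.sources = r1
    a1.channels = c1
    a1.sources.r1.type = http
    a1.sources.r1.port = 5140
    a1.sources.r1.channels = c1
    a1.sources.r1.handler = org.example.rest.RestHandler
    a1.sources.r1.handler.nickname = random props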

A deprecated value will be overwritten with the new one. Handlers include the default JSON handler and org.apache.flume.sink.solr.morphline.BlobHandler for large binary payloads. StressSource properties: type (needs to be org.apache.flume.source.StressSource); size (payload size of each event, in bytes); maxTotalEvents (-1; maximum number of events to be sent); maxSuccessfulEvents (-1; maximum number of events successfully sent); batchSize (1; number of events to be sent in one batch); maxEventsPerSecond (0; when set to an integer greater than zero, enforces a rate limiter on the source). An example is sketched below.
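
A StressSource sketch in the guide's style (sizes and counts are illustrative):

    a1.sources = stresssource-1
    a1.channels = memoryChannel-1
    a1.sources.stresssource-1.type = org.apache.flume.source.StressSource
    a1.sources.stresssource-1.size = 10240
    a1.sources.stresssource-1.maxTotalEvents = 1000000
    a1.sources.stresssource-1.channels = memoryChannel-1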

Note: the legacy sources (AvroLegacySource, ThriftLegacySource), custom sources (MySource), and the ScribeSource are configured in the same way; the reliability semantics of Flume 1.x apply. Arbitrary header names are supported. Hive sink properties: channel (required); type (needs to be hive); plus the hive connection settings.

The value may contain escape sequences. This setting configures the number of desired transactions per transaction batch; data from all transactions in a single batch ends up in a single file. Flume writes a maximum of batchSize events in each transaction in the batch. This setting, in conjunction with batchSize, provides control over the size of each file. Note that eventually Hive will transparently compact these files into larger files.

Set this value to 0 to disable heartbeats. If this number is exceeded, the least recently used connection is closed. The choice of serializer depends upon the format of the data in the event. Serializer fields are specified as a comma-separated list (no spaces) of Hive table column names, identifying the input fields in order of their occurrence.

To skip fields, leave the column name unspecified. There can be a gain in efficiency if the serializer field names are in the same order as the table columns; ensure input fields do not contain the delimiter character. Logger sink properties: channel (required); type (needs to be logger); maxBytesToLog (16; the maximum number of bytes of the event body to log); configured with the usual a1 pattern. Avro sink properties: channel (required); type (needs to be avro).

This will force the Avro sink to reconnect to the next hop, allowing the sink to connect to hosts behind a hardware load balancer when new hosts are added, without having to restart the agent. The compression-type must match the compression-type of the matching AvroSource; compression-level (default 6) sets the level of compression used to compress events.

If a keystore, keystore password, or keystore type is not specified, the corresponding global keystore setting is used if defined; for the keystore type, the default is otherwise JKS. Thrift sink properties: channel (required); type (needs to be thrift). The connection reset forces the Thrift sink to reconnect to the next hop.

In Kerberos mode, client-principal, client-keytab, and server-principal are required for successful authentication and communication with a Kerberos-enabled Thrift source. For the rolling file sink, specifying 0 disables rolling and causes all events to be written to a single file.

A custom sink is specified via its Builder interface implementation. Null sink properties: channel (required); type (needs to be null). HBase sink properties: channel (required); type (needs to be hbase); table (the name of the table in HBase to write to); zookeeperQuorum (the value for the hbase.zookeeper.quorum property).

znodeParent is the value of zookeeper.znode.parent. Coalescing increments might give better performance if there are multiple increments to a limited number of cells. The RegexHbaseEventSerializer can be used with the hbase sink; the hbase2 sink is configured the same way (type hbase2, with table naming the HBase table to write to, and RegexHBase2EventSerializer), as is the async sink (type asynchbase, with SimpleAsyncHbaseEventSerializer).

These properties take precedence over the old zookeeperQuorum and znodeParent values. You can find the list of the available properties on the documentation page of AsyncHBase.

MorphlineSolrSink properties (required ones are in bold in the original guide): channel (required); type (needs to be org.apache.flume.sink.solr.morphline.MorphlineSolrSink); morphlineFile (the relative or absolute path on the local file system to the morphline configuration file). The transaction commits after this duration or when batchSize is exceeded, whichever comes first. isProductionMode (false): this flag should be enabled for mission-critical, large-scale online production systems that need to make progress without downtime when unrecoverable exceptions occur.

Corrupt or malformed parser input data, parser bugs, and errors related to unknown Solr schema fields produce unrecoverable exceptions.

SolrServerException is the default entry in a comma-separated list of recoverable exceptions that tend to be transient, in which case the corresponding task can be retried; examples include network connection errors and timeouts. When the production-mode flag is set to true, the recoverable exceptions configured using this parameter will not be ignored and hence will lead to retries. This enables the sink to make progress and avoid retrying an event forever. ElasticSearchSink: required properties are in bold in the original guide.

TTL is accepted both in the earlier integer-only form and in the newer form with a unit qualifier. Note: header substitution is a handy way to use the value of an event header to dynamically decide the indexName and indexType to use when storing the event; the ElasticSearchDynamicSerializer supports this. DatasetSink properties: channel (required); type (must be the DatasetSink class); plus the kite dataset settings. The flush setting only applies to flushable datasets, and unflushed files need to be recovered by hand for the data to be visible to DatasetReaders; the sync setting only applies to syncable datasets.

For the kite entity parser setting, valid values are avro and the fully qualified class name of an implementation of the EntityParser interface. The default failure value, retry, will fail the current batch and try again, which matches the old behavior.

Other valid values are save, which will write the raw event to the configured kite error dataset (required when that failure policy is used). Required properties are marked in bold font in the original guide. Kafka sink properties: type (must be set to org.apache.flume.sink.kafka.KafkaSink).

kafka.bootstrap.servers: the format is a comma-separated list of hostname:port pairs. kafka.topic: if this parameter is configured, messages will be published to this topic; arbitrary header substitution is supported. Larger batches improve throughput while adding latency. Accepted acknowledgement values are 0 (never wait for acknowledgement), 1 (wait for the leader only), and -1 (wait for all replicas); set this to -1 to avoid data loss in some cases of leader failure. Set useFlumeEventFormat to true to store events in the Flume Avro binary format.

Used in conjunction with the same property on the KafkaSource, or with the parseAsFlumeEvent property on the Kafka channel, this preserves any Flume headers from the producing side. If the value represents an invalid partition, an EventDeliveryException will be thrown.

If the header value is present, this setting overrides defaultPartitionId. Care should be taken when using this in conjunction with the Kafka source's topicHeader property, to avoid creating a loopback. Any producer property supported by Kafka can be used, prefixed with kafka.producer; an example Kafka sink is sketched below.
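
A Kafka sink sketch in the guide's style (topic and broker address are illustrative; note the kafka.producer prefix on the pass-through producer properties):

    a1.sinks.k1.channel = c1
    a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
    a1.sinks.k1.kafka.topic = mytopic
    a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
    a1.sinks.k1.flumeBatchSize = 20
    a1.sinks.k1.kafka.producer.acks = 1
    a1.sinks.k1.kafka.producer.linger.ms = 1
    a1.sinks.k1.kafka.producer.compression.type = snappy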

HTTP sink properties: channel (required); type (needs to be http). backoff.CODE configures a specific backoff for an individual (e.g. 200) or group (e.g. 2XX) code, and rollback.CODE configures a specific rollback in the same way.

incrementMetrics.CODE configures a specific metrics increment for an individual or group code in the same way. Any empty or null events are consumed without any request being made to the HTTP endpoint. A custom sink (MySink) is configured like any other sink. Memory channel properties: type (needs to be memory); capacity (the maximum number of events stored in the channel); transactionCapacity (the maximum number of events the channel will take from a source or give to a sink per transaction); keep-alive (3; timeout in seconds for adding or removing an event); byteCapacityBufferPercentage (20; the percent of buffer between byteCapacity and the estimated total size of all events in the channel, to account for data in headers).

The implementation only counts the event body, which is the reason for providing the byteCapacityBufferPercentage configuration parameter as well. Note that if you have multiple memory channels on a single JVM, and they happen to hold the same physical events (for instance, via a replicating channel selector), those event sizes may effectively be double-counted. Setting this value to 0 will cause it to fall back to a hard internal limit. JDBC channel properties: type (needs to be jdbc) plus the db settings. Kafka channel: Kafka provides high availability and replication, so in case an agent or a Kafka broker crashes, the events are immediately available to other sinks. The Kafka channel can be used for multiple scenarios: with a Flume source and sink, it provides a reliable and highly available channel for events; with a Flume source and interceptor but no sink, it allows writing Flume events into a Kafka topic for use by other apps; with a Flume sink but no source, it is a low-latency, fault-tolerant way to send events from Kafka to Flume sinks such as HDFS, HBase, or Solr. This currently supports recent Kafka server releases.

The configuration parameters are organized as follows: configuration values related to the channel generically are applied at the channel config level, e.g. a1.channels.channel1.type, while configuration values related to Kafka or to how the channel operates are prefixed with kafka (analogous to the common client configs), e.g. a1.channels.channel1.kafka.topic. Multiple channels must use the same topic and group to ensure that when one agent fails, another can get the data. Note that having non-channel consumers with the same ID can lead to data loss.

parseAsFlumeEvent should be true if a Flume source is writing to the channel, and false if other producers are writing into the topic that the channel is using. Flume source messages to Kafka can be parsed outside of Flume using the Flume Avro event schema classes. If the value represents an invalid partition, the event will not be accepted into the channel. An example Kafka channel is sketched below.
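
A Kafka channel sketch in the guide's style (broker list and topic are illustrative):

    a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
    a1.channels.channel1.kafka.bootstrap.servers = kafka-1:9092,kafka-2:9092,kafka-3:9092
    a1.channels.channel1.kafka.topic = channel1
    a1.channels.channel1.kafka.consumer.group.id = flume-consumer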

Deprecated properties: brokerList (a list of brokers in the Kafka cluster used by the channel; this can be a partial list of brokers, but we recommend at least two for HA; the format is a comma-separated list of hostname:port) and topic (default flume-channel; use kafka.topic instead). Zookeeper offset migration should be true to support seamless Kafka client migration from older versions of Flume; once migrated, this can be set to false, though that should generally not be required. If no Zookeeper offset is found, the kafka.consumer.auto.offset.reset configuration defines how offsets are handled.

Note: due to the way the channel is load-balanced, there may be duplicate events when the agent first starts up. File channel properties: type (needs to be file); useDualCheckpoints (if this is set to true, backupCheckpointDir must be set); backupCheckpointDir (the directory where the checkpoint is backed up to); data directories (using multiple directories on separate disks can improve file channel performance); transactionCapacity (the maximum size of transaction supported by the channel); checkpointInterval (amount of time in millis between checkpoints); maxFileSize (max size in bytes of a single log file); minimumRequiredSpace (minimum required free space in bytes).

Creating a checkpoint on close speeds up subsequent startup of the file channel by avoiding replay. For the spillable memory channel: to disable use of the in-memory queue, set its capacity to zero; to disable use of overflow, set the overflow capacity to zero. The keep-alive of the file channel is managed by the spillable memory channel. The in-memory queue is considered full if either the memoryCapacity or byteCapacity limit is reached. Channel selector properties (selector.type and related settings) follow.

Sink group properties: sinks (a space-separated list of sinks that are participating in the group) plus the processor settings. A larger absolute priority value indicates higher priority.

With backoff disabled, in round-robin mode the load of all failed sinks is simply passed to the next sink in line, and is thus not evenly balanced. Required properties are in bold in the original guide; the sink processor properties (processor.type etc.) follow.

Configuration options are as follows: appendNewline (true; whether a newline will be appended to each event at write time).

The default of true assumes that events do not contain newlines, for legacy reasons. For the Avro event serializer, the configuration options are: syncIntervalBytes (the Avro sync interval, in approximate bytes). Schemas specified in the header override this option.

Interceptors are named components; they are created through configuration, as in the static interceptor sketch below. Host interceptor properties: type (has to be host); preserveExisting (false; if the host header already exists, should it be preserved); useIP (true; use the IP address if true, else the host name).

Static interceptor properties: type (has to be static); preserveExisting (true; if the configured header already exists, should it be preserved); key (the name of the header that should be created); value (the static value that should be created). An example for an agent named a1 is sketched below.
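
A static interceptor sketch in the guide's style, attached to source r1 (the key and value are illustrative):

    a1.sources = r1
    a1.channels = c1
    a1.sources.r1.channels = c1
    a1.sources.r1.interceptors = i1
    a1.sources.r1.interceptors.i1.type = static
    a1.sources.r1.interceptors.i1.key = datacenter
    a1.sources.r1.interceptors.i1.value = NEW_YORK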

The default separator is a comma surrounded by any number of whitespace characters. matching: all headers whose names match this regular expression are removed. Custom interceptor types are given as the fully qualified class name, and the charset is assumed by default to be UTF-8. For the regex extractor interceptor, Flume provides built-in support for serializers such as RegexExtractorInterceptorMillisSerializer, configured per capture group.

The plain RegexExtractorInterceptorSerializer is also available. No property value is needed when setting the flume system property; just specifying the -D flag is enough. If Hadoop is installed, the agent adds it to the classpath automatically. Hadoop credential provider properties: type (has to be hadoop) plus the credential provider path; the file must be on the classpath.

Log4j appender properties: Hostname (the host on which a remote Flume agent is running with an Avro source); Port; UnsafeMode (false; if true, the appender will not throw exceptions on failure to send the events). A sample log4j.properties file is sketched below. For the load-balancing appender, MaxBackoff is a long value representing the maximum amount of time in milliseconds that the load-balancing client will back off from a node that has failed to consume an event.
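
A sample log4j.properties sketch in the guide's style (host, port, and logger name are illustrative):

    log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
    log4j.appender.flume.Hostname = example.com
    log4j.appender.flume.Port = 41414
    log4j.appender.flume.UnsafeMode = true

    # configure a class's logger to output to the flume appender
    log4j.logger.org.example.MyClass = DEBUG,flume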

MaxBackoff defaults to no backoff. UnsafeMode (false): if true, the appender will not throw exceptions on failure to send the events. The LoadBalancingLog4jAppender is configured in log4j.properties in the same way, using its own appender class.

By default, Flume reports metrics in Ganglia 3.x format. A custom event validator (e.g. a MyEventValidator class, with a -DmaxSize system property) can also be supplied. Is Flume a good fit for your problem? For other use cases, here are some guidelines: Flume is designed to transport and ingest regularly generated event data over relatively stable, potentially complex topologies.

Component type aliases, with the implementation classes given in the original table:

Channels: memory (MemoryChannel), jdbc (JdbcChannel), file (FileChannel), pseudo-transaction (PseudoTxnMemoryChannel), custom (e.g. MyChannel).

Sources: avro (AvroSource), netcat (NetcatSource), seq (SequenceGeneratorSource), exec (ExecSource), syslogtcp (SyslogTcpSource), syslogudp (SyslogUDPSource), spooldir (SpoolDirectorySource), http (HTTPSource), thrift (ThriftSource), jms (JMSSource), legacy (AvroLegacySource, ThriftLegacySource), custom (e.g. MySource).

Sinks: null (NullSink), logger (LoggerSink), avro (AvroSink), hdfs, hbase (HBaseSink), hbase2 (HBase2Sink), asynchbase (AsyncHBaseSink), elasticsearch (ElasticSearchSink), file_roll (RollingFileSink), irc (IRCSink), thrift (ThriftSink), custom (e.g. MySink).

Channel selectors: replicating (ReplicatingChannelSelector), multiplexing (MultiplexingChannelSelector), custom (e.g. MyChannelSelector).

Sink processors: default (DefaultSinkProcessor), failover (FailoverSinkProcessor), load-balancing (LoadBalancingSinkProcessor), custom.

Interceptors: timestamp, host, static.

Key and cipher providers: custom key provider (e.g. MyKeyProvider); aesctrnopadding cipher provider; custom cipher provider (e.g. MyCipherProvider).

Alias-name conventions: a = agent, c = channel, r = source, k = sink, g = sink group, i = interceptor, y = key, h = host, s = serializer.


