Autonomous Identity 2022.11.6

Changelog

ForgeRock continuously provides updates to Autonomous Identity to introduce new features, fix known bugs, and address security issues.

Key fixes

2022.11.6

This release contains the latest container images.

2022.11.5

This release contains a collection of security and bug fixes.

2022.11.4

This release contains a collection of security and bug fixes.

2022.11.3

This release includes security fixes and fixes for the following bugs:

  • AUTOID-3174: Need an assignments API

  • AUTOID-3362: Allow customers to change the timeout for the API container when running OpenSearch queries

2022.11.2

The following bugs were fixed in this release:

  • AUTOID-3329: Misspelled HTTP header in the Kibana configuration

  • AUTOID-3331: Elasticsearch keystore and truststore password

2022.11.1

This release contains a collection of important security fixes.

2022.11.0

This release contains a collection of security and bug fixes.

Known issues

2022.11.6

There are no known issues in this release.

2022.11.5

There are no known issues in this release.

2022.11.4

There are no known issues in this release.

2022.11.3

Discovered regression

Autonomous Identity 2022.11.3 was originally released on April 11, 2023.

We discovered a regression where Apache Livy includes log4j1 binaries with the deployer. If you installed 2022.11.3 before April 13, 2023, run the steps below to upgrade log4j1 to log4j2.

If you installed 2022.11.3 after April 13, 2023, the binaries are already updated, and you do not need to upgrade the log4j1 binaries.

Update log4j1 to log4j2
  1. Stop the Apache Livy server:

    ~/livy/bin/livy-server stop
  2. Back up your old log4j and related jar files:

    cd ~/livy/jars
    mv log4j-1.2.16.jar ~/log4j-1.2.16.jar.bkp
    mv slf4j-log4j12-1.6.1.jar ~/slf4j-log4j12-1.6.1.jar.bkp
    mv slf4j-reload4j-1.7.36.jar ~/slf4j-reload4j-1.7.36.jar.bkp
    mv slf4j-api-1.7.25.jar ~/slf4j-api-1.7.25.jar.bkp
  3. Replace them with the log4j2 jar and its bridge jars:
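
    A minimal sketch, assuming you downloaded Log4j 2.17.2 and its companion jars to ~/downloads (a hypothetical path); adjust names and versions to match your environment:

    # log4j2 API and core
    cp ~/downloads/log4j-api-2.17.2.jar ~/livy/jars/
    cp ~/downloads/log4j-core-2.17.2.jar ~/livy/jars/
    # log4j-1.2-api bridges legacy log4j1 calls to log4j2
    cp ~/downloads/log4j-1.2-api-2.17.2.jar ~/livy/jars/
    # log4j-slf4j-impl replaces the removed slf4j-log4j12 binding
    cp ~/downloads/log4j-slf4j-impl-2.17.2.jar ~/livy/jars/
    cp ~/downloads/slf4j-api-1.7.36.jar ~/livy/jars/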

  4. Under the conf folder, create a log4j2.properties file:

    cd ~/livy/conf
    vi log4j2.properties
  5. In your log4j2.properties file, adjust the log level and related configuration to suit your requirements:

    status = info
    name = RollingFileLogConfigDemo
    # Log files location
    property.basePath = ./logs
    # RollingFileAppender name, pattern, path, and rollover policy
    appender.rolling.type = RollingFile
    appender.rolling.name = fileLogger
    appender.rolling.fileName = ${basePath}/autoid.log
    appender.rolling.filePattern = ${basePath}/autoid_%d{yyyyMMdd}.log.gz
    appender.rolling.layout.type = PatternLayout
    appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss.SSS} %level [%t] [%l] - %msg%n
    appender.rolling.policies.type = Policies
    # RollingFileAppender rotation policy
    appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
    appender.rolling.policies.size.size = 10MB
    appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
    appender.rolling.policies.time.interval = 1
    appender.rolling.policies.time.modulate = true
    appender.rolling.strategy.type = DefaultRolloverStrategy
    appender.rolling.strategy.delete.type = Delete
    appender.rolling.strategy.delete.basePath = ${basePath}
    appender.rolling.strategy.delete.maxDepth = 10
    appender.rolling.strategy.delete.ifLastModified.type = IfLastModified
    # Delete all files older than 30 days
    appender.rolling.strategy.delete.ifLastModified.age = 30d
    # Configure root logger
    rootLogger.level = info
    rootLogger.appenderRef.rolling.ref = fileLogger
    # Let Log4j 2 honor legacy log4j1 configuration syntax
    log4j1.compatibility = true
  6. Restart Apache Livy:

    cd ~/livy/
    ./bin/livy-server start
  7. Check that Apache Livy is up and running. You can verify by running an analytics job and checking its logs. Autonomous Identity-specific logs are at ~/livy/logs/autoid.log.
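
    For a quick check, assuming Livy listens on its default port 8998 (adjust if you changed it):

    # Livy's REST API lists sessions when the server is up
    curl http://localhost:8998/sessions
    # Follow the Autonomous Identity log as jobs run
    tail -f ~/livy/logs/autoid.log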

2022.11.2

There are no known issues in this release.

2022.11.1

There are no known issues in this release.

2022.11.0

There is a known issue on RHEL 8/CentOS Stream 8 where the Docker Swarm overlay network configuration breaks when the outside network's maximum transmission unit (MTU) is smaller than the default value. The MTU is the maximum packet size that can be transmitted from a network interface.

When deploying a multinode configuration on RHEL 8/CentOS Stream 8, perform the following steps:

  1. Check the MTU for docker0 and eth0 using ifconfig | grep mtu.

  2. Set the docker0 MTU value to match eth0, for example, sudo ifconfig docker0 mtu <eth0-mtu-value>. Run the command on all nodes and again after each virtual machine reboot. See the sketch after this list.
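
A minimal sketch of both steps, assuming eth0 reports an MTU of 1450 (a hypothetical value; substitute whatever ifconfig shows on your nodes):

    # Inspect the current MTU values on this node
    ifconfig | grep mtu
    # Align docker0 with the outside interface (here, eth0 reported 1450)
    sudo ifconfig docker0 mtu 1450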

Deprecated

2022.11.0–2022.11.6

No functionality has been deprecated in these releases.

Removed

2022.11.0–2022.11.6

No functionality has been removed in these releases.

Copyright © 2010-2024 ForgeRock, all rights reserved.