An Incident Where a Full Local Oracle Installation Mount Point Triggered ORA-7445 [_memmove()+64] and Crashed the Instance

I recently worked on an incident with the following chain of events: the application hit ORA-600 [kole_t2u], [34] (see the MOS note "ORA-600 [kole_t2u], [34] - description, bugs, and reasons"), which dumped a large number of cdmp files under the udump directory.

These cdmp files then exhausted the local disk space. Once Oracle found it had no space left to write its logs, it raised ORA-7445 [_memmove()+64] and the instance crashed.

The root cause of the ORA-600 [kole_t2u], [34] itself was an illegal operation by the application: it inserted characters into a table with a CLOB column that the database character set (ZHS16GBK) cannot represent.

The fault can be simulated as follows:

SQL> select * from v$version;

BANNER

--------------------------------------------------------------------------------

Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production

PL/SQL Release 11.2.0.2.0 - Production

CORE    11.2.0.2.0      Production

TNS for Linux: Version 11.2.0.2.0 - Production

NLSRTL Version 11.2.0.2.0 - Production

SQL> create table t3(a clob);

Table created.

SQL> insert into t3 values(utl_raw.cast_to_varchar2('EC'));

insert into t3 values(utl_raw.cast_to_varchar2('EC'))

            *

ERROR at line 1:

ORA-00600: internal error code, arguments: [kole_t2u], [34], [], [], [], [],[], [], [], [], [], []
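Here the lone byte 0xEC is a ZHS16GBK lead byte with no trail byte, so the conversion to the UCS2 encoding used for CLOB storage fails with ORA-600 [kole_t2u], [34]. For contrast, a complete two-byte sequence inserts cleanly (a minimal sketch; 0xECA1 is just one arbitrary valid lead-byte/trail-byte pair in GBK):

SQL> -- a complete two-byte GBK sequence converts to the CLOB encoding without error
SQL> insert into t3 values(utl_raw.cast_to_varchar2('ECA1'));

1 row created.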

SQL> select * from nls_database_parameters;

PARAMETER                      VALUE

------------------------------ --------------------------------------------------------------------------------

NLS_LANGUAGE                   AMERICAN

NLS_TERRITORY                  AMERICA

NLS_CURRENCY                   $

NLS_ISO_CURRENCY               AMERICA

NLS_NUMERIC_CHARACTERS         .,

NLS_CHARACTERSET               ZHS16GBK

NLS_CALENDAR                   GREGORIAN

NLS_DATE_FORMAT                DD-MON-RR

NLS_DATE_LANGUAGE              AMERICAN

NLS_SORT                       BINARY

NLS_TIME_FORMAT                HH.MI.SSXFF AM

PARAMETER                      VALUE

------------------------------ --------------------------------------------------------------------------------

NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM

NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR

NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR

NLS_DUAL_CURRENCY              $

NLS_COMP                       BINARY

NLS_LENGTH_SEMANTICS           BYTE

NLS_NCHAR_CONV_EXCP            FALSE

NLS_NCHAR_CHARACTERSET         AL16UTF16

NLS_RDBMS_VERSION              11.2.0.2.0

20 rows selected.

Versions affected by ORA-600 [kole_t2u], [34]:

Oracle Database - Enterprise Edition - Version 9.2.0.1 to 11.2.0.4 [Release 9.2 to 11.2]

Information in this document applies to any platform.

Oracle Server Enterprise Edition - Version: 9.2.0.1 to 11.1.0.6

***Checked for relevance on 20-Jan-2014***

Scenarios that trigger ORA-600 [kole_t2u], [34]:

As stated, this error can come up in multibyte environments. It is also clear that whenever this error comes up, there must be at least one incomplete codepoint in the data.

In general we can split these occurrences into 3 categories:

1. Invalid multibyte data is being inserted by an application into a CLOB.

2. Invalid multibyte data has been inserted into a VARCHAR2 (without initially being detected), and the stored data is moved to a CLOB at a later stage (either through application code, or by an Oracle process such as Auditing).

3. Existing, correctly stored CLOB data is incorrectly "split" into chunks. This can leave a codepoint "split" in the middle of the byte stream, with an incorrect number of bytes for the last codepoint before the split. It can happen either in application code or due to a bug in the database.

The MOS note examines each of these three categories in depth; a quick check for category 2 is sketched below.
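For category 2, one rough way to find VARCHAR2 rows that will fail when later moved into a CLOB is a character-set round trip: bytes that are not valid ZHS16GBK do not survive conversion to UTF8 and back, because Oracle substitutes a replacement character for them. This is only a sketch (the table t_app_data and column a are hypothetical; Oracle's CSSCAN/DMU utilities are the supported way to scan for such data):

SQL> -- rows whose bytes change after a ZHS16GBK -> UTF8 -> ZHS16GBK round trip
SQL> -- contain byte sequences that are not valid ZHS16GBK characters
SQL> select rowid, dump(a, 16) as byte_dump
  2    from t_app_data
  3   where a <> convert(convert(a, 'UTF8', 'ZHS16GBK'), 'ZHS16GBK', 'UTF8');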

The final resolution:

1. Ask the application team to review the business program against the error records and fix the code path that inserts the invalid multibyte data.

2. To keep oversized cdmp/trace files from filling the local disk, set MAX_DUMP_FILE_SIZE (parameter reference and an example follow).

MAX_DUMP_FILE_SIZE

Property         Description
---------------  ----------------------------------------------------
Parameter type   String
Syntax           MAX_DUMP_FILE_SIZE = { integer [K | M] | UNLIMITED }
Default value    UNLIMITED
Modifiable       ALTER SESSION, ALTER SYSTEM
Range of values  0 to unlimited, or UNLIMITED
Basic            No

MAX_DUMP_FILE_SIZE specifies the maximum size of trace files (excluding the alert file). Change this
limit if you are concerned that trace files may use too much space.

  • A numerical value for MAX_DUMP_FILE_SIZE specifies the maximum size in operating system blocks.

  • A number followed by a K or M suffix specifies the file size in kilobytes or megabytes.

  • The special value string UNLIMITED means that there is no upper limit on trace file size. Thus, dump files can be as large as the operating system permits.
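For example, to cap each trace/dump file instance-wide and verify the setting (the 512M limit here is an arbitrary illustrative value, not a recommendation from the note):

SQL> alter system set max_dump_file_size = '512M' scope=both;

System altered.

SQL> show parameter max_dump_file_size

NAME                 TYPE    VALUE
-------------------- ------- ------
max_dump_file_size   string  512M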

Relevant MOS documents:

ORA-600 [kole_t2u], [34] - description, bugs, and reasons (Doc ID 734474.1)

ORA-600 [kole_t2u] With Multibyte Character Sets While Appending To LOB In a Loop (Doc ID 739282.1)

AL32UTF8 / UTF8 (Unicode) Database Character Set Implications (Doc ID 788156.1)

Character set conversion when using UTL_FILE (Doc ID 227531.1)

Getting ORA-7445 [_memmove()+64] and Instance Crashed (Doc ID 1294148.1)

The last of these notes is reproduced below.


APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.2.0.4 to 11.2.0.2 [Release 10.2 to 11.2]

Information in this document applies to any platform.

SYMPTOMS

In the alert log file there are reported errors like:

ORA-07445: exception encountered: core dump [_memmove()+64] [SIGBUS]

                       [Invalid address alignment] [0x0.........] [] []

and instance crashed.

The trace files generated by these errors are truncated (not relevant) or have no information (0 bytes).

CAUSE

In this case the issue is caused by a resource problem outside Oracle.

By checking the OS system log, you can see that the Oracle mount point ran out of space just before the errors appeared and stayed full until the instance crashed.

For example in an HP-UX system, the messages can be as the following:

vmunix: vxfs: NOTICE: msgcnt 1 mesg 001: V-2-1: vx_nospace - /dev/vg01/lvora file system full (1 block extent)

SOLUTION

Monitor your system in order to avoid running out of space in Oracle mount points.
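To know which mount points to watch, the trace and dump locations can be read from the instance itself. A minimal sketch for 11g, where the ADR locations are exposed in v$diag_info (on older releases the user_dump_dest and background_dump_dest parameters give the same information):

SQL> -- directories where trace, incident and alert files are written
SQL> select name, value
  2    from v$diag_info
  3   where name in ('Diag Trace', 'Diag Incident', 'Diag Alert');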
