JVM Source Code Analysis: How Many Threads Can a Single Java Process Create?

Overview

Although the title bills this as JVM source code analysis, the discussion is not limited to the JVM: much of it comes from reading the Linux kernel source. The topic is a fairly common JVM question.

The question can be phrased in several ways:

  • How many threads can a single Java process actually create?
  • Which factors determine how many threads can be created?
  • What exactly is going on when java.lang.OutOfMemoryError: unable to create new native thread is thrown?

A caveat up front: I cannot promise to enumerate every factor, since I do not work on the Linux kernel and there are surely details I have missed. I will share the factors I was able to identify; if you have run into others in your own work, please leave a comment so more people can join the discussion.

Starting from the JVM

Everyone is familiar with threads: new Thread().start() creates one. The first point to make is that new Thread() by itself does not create a real thread; only the call to start() does. You can see this by reading the Java code: Thread's constructor is pure Java, while start() calls into the native method start0, which is implemented by JVM_StartThread.
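You can confirm this from pure Java before looking at any native code. The sketch below uses only standard java.lang.Thread APIs (nothing HotSpot-specific is assumed): the thread object reports the NEW state right after construction, and only leaves it once start() has gone through start0.

public class StartDemo {
    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
        });
        // Only a Java object exists so far: the constructor is pure Java and
        // nothing has asked the OS for a thread yet.
        System.out.println(t.getState());   // NEW

        t.start();                          // start() -> native start0() -> JVM_StartThread
        System.out.println(t.getState());   // RUNNABLE (or TIMED_WAITING once it sleeps)

        t.interrupt();
        t.join();
    }
}

On Linux you can see the same thing from outside the process: the new entry under /proc/<pid>/task only appears after start() returns. Here is JVM_StartThread itself (abridged):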

JVM_ENTRY(void, JVM_StartThread(JNIEnv* env, jobject jthread))

  ...
          
      // We could also check the stillborn flag to see if this thread was already stopped, but
      // for historical reasons we let the thread detect that itself when it starts running

      jlong size =
             java_lang_Thread::stackSize(JNIHandles::resolve_non_null(jthread));
      // Allocate the C++ Thread structure and create the native thread.  The
      // stack size retrieved from java is signed, but the constructor takes
      // size_t (an unsigned type), so avoid passing negative values which would
      // result in really large stacks.
      size_t sz = size > 0 ? (size_t) size : 0;
      native_thread = new JavaThread(&thread_entry, sz);
        
  ...    

  if (native_thread->osthread() == NULL) {
    ...
    THROW_MSG(vmSymbols::java_lang_OutOfMemoryError(),
              "unable to create new native thread");
  }

  Thread::start(native_thread);

JVM_END

In the code above, look first at the final check, if (native_thread->osthread() == NULL): when osthread is NULL, the familiar unable to create new native thread OutOfMemoryError is thrown. So the key question is when osthread ends up NULL, and we will see that shortly.

Also note native_thread = new JavaThread(&thread_entry, sz); this is where the real thread gets created:

JavaThread::JavaThread(ThreadFunction entry_point, size_t stack_sz) :
  Thread()
#ifndef SERIALGC
  , _satb_mark_queue(&_satb_mark_queue_set),
  _dirty_card_queue(&_dirty_card_queue_set)
#endif // !SERIALGC
{
  if (TraceThreadEvents) {
    tty->print_cr("creating thread %p", this);
  }
  initialize();
  _jni_attach_state = _not_attaching_via_jni;
  set_entry_point(entry_point);
  // Create the native thread itself.
  // %note runtime_23
  os::ThreadType thr_type = os::java_thread;
  thr_type = entry_point == &compiler_thread_entry ? os::compiler_thread :
                                                     os::java_thread;
  os::create_thread(this, thr_type, stack_sz);

}

os::create_thread(this, thr_type, stack_sz) in the code above creates the thread via pthread_create; the Linux implementation looks like this:

bool os::create_thread(Thread* thread, ThreadType thr_type, size_t stack_size) {
  assert(thread->osthread() == NULL, "caller responsible");

  // Allocate the OSThread object
  OSThread* osthread = new OSThread(NULL, NULL);
  if (osthread == NULL) {
    return false;
  }

  // set the correct thread state
  osthread->set_thread_type(thr_type);

  // Initial state is ALLOCATED but not INITIALIZED
  osthread->set_state(ALLOCATED);

  thread->set_osthread(osthread);

  // init thread attributes
  pthread_attr_t attr;
  pthread_attr_init(&attr);
  pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

  // stack size
  if (os::Linux::supports_variable_stack_size()) {
    // calculate stack size if it's not specified by caller
    if (stack_size == 0) {
      stack_size = os::Linux::default_stack_size(thr_type);

      switch (thr_type) {
      case os::java_thread:
        // Java threads use ThreadStackSize which default value can be
        // changed with the flag -Xss
        assert (JavaThread::stack_size_at_create() > 0, "this should be set");
        stack_size = JavaThread::stack_size_at_create();
        break;
      case os::compiler_thread:
        if (CompilerThreadStackSize > 0) {
          stack_size = (size_t)(CompilerThreadStackSize * K);
          break;
        } // else fall through:
          // use VMThreadStackSize if CompilerThreadStackSize is not defined
      case os::vm_thread:
      case os::pgc_thread:
      case os::cgc_thread:
      case os::watcher_thread:
        if (VMThreadStackSize > 0) stack_size = (size_t)(VMThreadStackSize * K);
        break;
      }
    }

    stack_size = MAX2(stack_size, os::Linux::min_stack_allowed);
    pthread_attr_setstacksize(&attr, stack_size);
  } else {
    // let pthread_create() pick the default value.
  }

  // glibc guard page
  pthread_attr_setguardsize(&attr, os::Linux::default_guard_size(thr_type));

  ThreadState state;

  {
    // Serialize thread creation if we are running with fixed stack LinuxThreads
    bool lock = os::Linux::is_LinuxThreads() && !os::Linux::is_floating_stack();
    if (lock) {
      os::Linux::createThread_lock()->lock_without_safepoint_check();
    }

    pthread_t tid;
    int ret = pthread_create(&tid, &attr, (void* (*)(void*)) java_start, thread);

    pthread_attr_destroy(&attr);

    if (ret != 0) {
      if (PrintMiscellaneous && (Verbose || WizardMode)) {
        perror("pthread_create()");
      }
      // Need to clean up stuff we've allocated so far
      thread->set_osthread(NULL);
      delete osthread;
      if (lock) os::Linux::createThread_lock()->unlock();
      return false;
    }

    // Store pthread info into the OSThread
    osthread->set_pthread_id(tid);
     ...
  }
   ...
  return true;
}

If new OSThread itself fails, osthread is obviously NULL, and back in the first code snippet java.lang.OutOfMemoryError: unable to create new native thread is thrown. When can new OSThread fail? For example, when memory runs out. Note that the memory in question here is the C heap (native memory), not the Java heap. So from the JVM's point of view, the factors that influence thread creation include Xmx, MaxPermSize, MaxDirectMemorySize, ReservedCodeCacheSize and so on, because these parameters determine how much memory is left over for everything else.
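To get a feel for the arithmetic, here is a deliberately rough back-of-the-envelope estimate, runnable as a jshell snippet. All numbers are illustrative assumptions of mine (a 32-bit JVM with roughly 3 GB of usable address space), and the real layout also holds the JVM binary, shared libraries, malloc arenas and so on, so treat the result as an upper bound at best:

// All values below are assumed examples, not measurements.
long addressSpace = 3L * 1024 * 1024 * 1024;   // ~3 GB usable in a 32-bit process
long xmx          = 2L * 1024 * 1024 * 1024;   // -Xmx2g
long maxPerm      = 256L * 1024 * 1024;        // -XX:MaxPermSize=256m
long codeCache    = 48L * 1024 * 1024;         // -XX:ReservedCodeCacheSize=48m
long xss          = 1L * 1024 * 1024;          // -Xss1m

long leftForStacks = addressSpace - xmx - maxPerm - codeCache;
System.out.println(leftForStacks / xss);       // roughly 700 threads before native memory is starved

On a 64-bit JVM the address space is rarely the bottleneck; physical memory plus the kernel-side limits discussed below take over.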

Also note that if pthread_create fails, thread->set_osthread(NULL) clears the field, so osthread is again NULL, the same OOM is thrown, and thread creation fails. The next step, then, is to look at what can make pthread_create fail.

pthread_create in glibc

stack_size

pthread_create is implemented in glibc:

int
__pthread_create_2_1 (pthread_t *newthread, const pthread_attr_t *attr,
              void *(*start_routine) (void *), void *arg)
{
  STACK_VARIABLES;

  const struct pthread_attr *iattr = (struct pthread_attr *) attr;
  struct pthread_attr default_attr;
  ...
  struct pthread *pd = NULL;
  int err = ALLOCATE_STACK (iattr, &pd);
  int retval = 0;

  if (__glibc_unlikely (err != 0))
    /* Something went wrong.  Maybe a parameter of the attributes is
       invalid or we could not allocate memory.  Note we have to
       translate error codes.  */
    {
      retval = err == ENOMEM ? EAGAIN : err;
      goto out;
    }
    
    ...
   
  }

The line I want to highlight is int err = ALLOCATE_STACK (iattr, &pd). As the name suggests, it allocates the thread's stack: based on the stack size recorded in iattr, a chunk of memory is mapped with mmap and handed to the thread to use as its stack.

Now, about that stack size. A thread needs some stack space in order to run, so if there is not enough memory left when the stack is allocated, creation is bound to fail. Under the JVM the stack size can be set with -Xss; if it is not specified there is a default, and the defaults from JDK 6 onward (inclusive) are:

// return default stack size for thr_type
size_t os::Linux::default_stack_size(os::ThreadType thr_type) {
  // default stack size (compiler thread needs larger stack)
#ifdef AMD64
  size_t s = (thr_type == os::compiler_thread ? 4 * M : 1 * M);
#else
  size_t s = (thr_type == os::compiler_thread ? 2 * M : 512 * K);
#endif // AMD64
  return s;
}

Many people wonder whether thread stacks live inside the Java heap governed by -Xmx. They do not. So, judging from this piece of glibc logic, the JVM's Xss setting is another very important factor for thread creation.
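There are two ways this stack size ends up being something other than the default, and both are visible in the HotSpot code above: -Xss feeds JavaThread::stack_size_at_create(), and the four-argument Thread constructor supplies the per-thread value that JVM_StartThread reads back via java_lang_Thread::stackSize(). Here is a small sketch of the latter; note that the javadoc explicitly allows the VM to ignore the hint, and os::create_thread rounds it up to min_stack_allowed anyway:

public class StackSizeDemo {
    static int depth(int d) {
        return depth(d + 1);                    // recurse until StackOverflowError
    }

    public static void main(String[] args) throws Exception {
        // Request a 64 KiB stack for this one thread; the VM may ignore or round it up.
        Thread small = new Thread(null, () -> {
            try {
                depth(0);
            } catch (StackOverflowError e) {
                System.out.println("overflowed on a 64 KiB request");
            }
        }, "small-stack", 64 * 1024);
        small.start();
        small.join();
    }
}

The global default is changed the usual way, e.g. java -Xss512k ..., which shrinks every Java thread stack and therefore increases the number of threads that fit into the remaining native memory.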

clone in the Linux kernel

If the stack allocation succeeds, the next step is to actually create the thread; roughly:

retval = create_thread (pd, iattr, true, STACK_VARIABLES_ARGS,
                  &thread_ran);

create_thread in turn issues the clone system call:

const int clone_flags = (CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SYSVSEM
               | CLONE_SIGHAND | CLONE_THREAD
               | CLONE_SETTLS | CLONE_PARENT_SETTID
               | CLONE_CHILD_CLEARTID
               | 0);

  TLS_DEFINE_INIT_TP (tp, pd);

  if (__glibc_unlikely (ARCH_CLONE (&start_thread, STACK_VARIABLES_ARGS,
                    clone_flags, pd, &pd->tid, tp, &pd->tid)
            == -1))
    return errno;

The system call takes us into the Linux kernel.

The clone system call eventually ends up in do_fork, so let's dissect that function to see which other limiting factors live in the kernel.

max_user_processes

   retval = -EAGAIN;
    if (atomic_read(&p->real_cred->user->processes) >=
            task_rlimit(p, RLIMIT_NPROC)) {
        if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_RESOURCE) &&
            p->real_cred->user != INIT_USER)
            goto bad_fork_free;
    }

Look at this fragment first: it checks how many processes the user already owns. Remember that on Linux, processes and threads share the same data structure (task_struct), so the "process count" here effectively counts lightweight processes, i.e. threads. The ceiling is what ulimit -u reports, so once the current user's thread count exceeds this limit, thread creation is guaranteed to fail. It can be raised with ulimit -u <value>.
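If you want to check this ceiling from inside the Java process itself rather than from a shell, /proc/self/limits exposes the same RLIMIT_NPROC numbers that ulimit -u reports. A minimal, Linux-only sketch:

import java.nio.file.Files;
import java.nio.file.Paths;

public class NprocLimit {
    public static void main(String[] args) throws Exception {
        // "Max processes" is the RLIMIT_NPROC row; the two numbers are the soft and hard limits.
        for (String line : Files.readAllLines(Paths.get("/proc/self/limits"))) {
            if (line.startsWith("Max processes")) {
                System.out.println(line);
            }
        }
    }
}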

max_map_count

There is no shortage of malloc calls during this whole process, which bottom out in the brk system call, and the stack mentioned above is allocated with mmap. Whether it is malloc or mmap, down in the kernel there is a check of this form:

if (mm->map_count > sysctl_max_map_count)
        return -ENOMEM;

If the number of memory mappings held by the process exceeds sysctl_max_map_count, the call fails with ENOMEM. On Linux this value is exposed as /proc/sys/vm/max_map_count, the default is 65530, and the threshold can be raised by writing to that file.

max_threads

There is also a max_threads limit:

/*
     * If multiple threads are within copy_process(), then this check
     * triggers too late. This doesn't hurt, the check is only there
     * to stop root fork bombs.
     */
    retval = -EAGAIN;
    if (nr_threads >= max_threads)
        goto bad_fork_cleanup_count;

It can be inspected or changed through /proc/sys/kernel/threads-max. The value is bounded by physical memory and is computed once, in fork_init:

    /*
     * The default maximum number of threads is set to a safe
     * value: the thread structures can take up at most half
     * of memory.
     */
    max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);

    /*
     * we need to allow at least 20 threads to boot a system
     */
    if(max_threads < 20)
        max_threads = 20;
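Plugging illustrative numbers into that formula (assuming 4 KiB pages and 16 KiB kernel stacks, i.e. THREAD_SIZE = 16 KiB, which is typical for x86_64 but is architecture and kernel-version dependent), a machine with 16 GB of RAM gets roughly the following, shown here as a jshell snippet:

long pageSize   = 4 * 1024;
long threadSize = 16 * 1024;                            // assumed THREAD_SIZE, arch-dependent
long mempages   = (16L * 1024 * 1024 * 1024) / pageSize;
long maxThreads = mempages / (8 * threadSize / pageSize);
System.out.println(maxThreads);                         // 131072 under these assumptions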

pid_max

PIDs are limited as well:

   if (pid != &init_struct_pid) {
        retval = -ENOMEM;
        pid = alloc_pid(p->nsproxy->pid_ns);
        if (!pid)
            goto bad_fork_cleanup_io;

        if (clone_flags & CLONE_NEWPID) {
            retval = pid_ns_prepare_proc(p->nsproxy->pid_ns);
            if (retval < 0)
                goto bad_fork_free_pid;
        }
    }

alloc_pid is defined as follows:

struct pid *alloc_pid(struct pid_namespace *ns)
{
    struct pid *pid;
    enum pid_type type;
    int i, nr;
    struct pid_namespace *tmp;
    struct upid *upid;

    pid = kmem_cache_alloc(ns->pid_cachep, GFP_KERNEL);
    if (!pid)
        goto out;

    tmp = ns;
    for (i = ns->level; i >= 0; i--) {
        nr = alloc_pidmap(tmp);
        if (nr < 0)
            goto out_free;

        pid->numbers[i].nr = nr;
        pid->numbers[i].ns = tmp;
        tmp = tmp->parent;
    }
    ...
}

alloc_pidmap checks against pid_max, which is defined as follows:


/*
 * This controls the default maximum pid allocated to a process
 */
#define PID_MAX_DEFAULT (CONFIG_BASE_SMALL ? 0x1000 : 0x8000)

/*
 * A maximum of 4 million PIDs should be enough for a while.
 * [NOTE: PID/TIDs are limited to 2^29 ~= 500+ million, see futex.h.]
 */
#define PID_MAX_LIMIT (CONFIG_BASE_SMALL ? PAGE_SIZE * 8 : \
    (sizeof(long) > 4 ? 4 * 1024 * 1024 : PID_MAX_DEFAULT))
    
int pid_max = PID_MAX_DEFAULT;

#define RESERVED_PIDS        300

int pid_max_min = RESERVED_PIDS + 1;
int pid_max_max = PID_MAX_LIMIT;

This value can be viewed or changed through /proc/sys/kernel/pid_max.
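For convenience, the three kernel-side knobs from this section can be dumped from Java in one go. The sketch below simply reads the /proc files mentioned above and is obviously Linux-only:

import java.nio.file.Files;
import java.nio.file.Paths;

public class KernelThreadLimits {
    public static void main(String[] args) throws Exception {
        String[] knobs = {
            "/proc/sys/vm/max_map_count",
            "/proc/sys/kernel/threads-max",
            "/proc/sys/kernel/pid_max"
        };
        for (String f : knobs) {
            // Each file holds a single line with the current limit.
            System.out.println(f + " = " + Files.readAllLines(Paths.get(f)).get(0).trim());
        }
    }
}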

Summary

Having gone through the JVM, glibc and Linux kernel sources, we can tentatively list the factors that influence thread creation:

  • JVM: Xmx, Xss, MaxPermSize, MaxDirectMemorySize, ReservedCodeCacheSize
  • Kernel: max_user_processes, max_map_count, max_threads, pid_max

Since my time reading the kernel source was limited, this summary is probably not complete; feel free to add to it.
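Finally, if you want to see empirically which limit a particular machine hits first, a throwaway probe like the one below will do it. This is my own sketch, not something from the JVM sources, and it deliberately exhausts a resource, so run it only on a disposable test box:

public class ThreadProbe {
    public static void main(String[] args) {
        int created = 0;
        try {
            while (true) {
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE);   // park each thread forever
                    } catch (InterruptedException ignored) { }
                });
                t.setDaemon(true);
                t.start();
                created++;
            }
        } catch (Throwable t) {
            // Usually java.lang.OutOfMemoryError: unable to create new native thread,
            // but depending on which limit fires first the process may die instead.
            System.out.println("created " + created + " threads before: " + t);
        }
    }
}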
