Category: Tech

  • Processes missing from PM2 after a server reboot?

    Processes missing from PM2 after a server reboot?

    After rebooting the server, all of the processes shown by pm2 ls are gone. How do you make sure they survive future reboots?

    • Start your application with pm2 start
    • Set PM2 to launch on boot; note that this prints a command you must run manually
    • Run the command returned by the previous step
    • Save your current process list with pm2 save
    • Verify the result
    # 1. Start your application with pm2 start
    pm2 start app.js   # replace app.js with your entry script or ecosystem file
    # 2. Set PM2 to launch on boot; note that this prints a command you must run manually
    pm2 startup
    # 3. The returned command looks roughly like: sudo env PATH=$PATH:/www/server/nodejs/vxx/bin
    #    /www/server/nodejs/vxx/lib/node_modules/pm2/bin/pm2 startup systemd -u ubuntu --hp /home/ubuntu  -- run it
    # 4. Save your current process list
    pm2 save

    After rebooting the server, run pm2 ls to check whether your processes have been restored.

    In short: start your apps and set up pm2 startup,
    then save the process list and reboot the server to test.
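    If the processes do not come back on their own, the dump written by pm2 save can also be restored manually. A minimal sketch, assuming you ran pm2 save earlier:

    # Restore the process list saved by pm2 save (kept in ~/.pm2/dump.pm2)
    pm2 resurrect
    # Confirm the processes are running again
    pm2 ls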

  • What does wp-config.php do in WordPress?

    What does wp-config.php do in WordPress?

    The wp-config.php file is a vital part of every self-hosted WordPress website. It contains important settings that let WordPress connect to your database and run smoothly.

    The file is not included in the default WordPress download. It is created automatically during installation, when you enter your database details.

    If the information in this file is wrong, your site will not be able to connect to the database, and you will likely see the dreaded “Error establishing a database connection” message.

    Besides the database details, this file can also hold settings for debugging, security keys, memory limits, and more. We will cover these later in this article.
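    As an illustration, a typical wp-config.php contains entries along these lines (the values below are placeholders, not real credentials):

    <?php
    // Database connection details, filled in during installation
    define( 'DB_NAME',     'example_db' );
    define( 'DB_USER',     'example_user' );
    define( 'DB_PASSWORD', 'example_password' );
    define( 'DB_HOST',     'localhost' );
    // Unique keys and salts that secure cookies and sessions
    define( 'AUTH_KEY', 'put your unique phrase here' );
    // Optional tuning: debugging output and PHP memory limit
    define( 'WP_DEBUG', false );
    define( 'WP_MEMORY_LIMIT', '256M' );
    // Database table prefix
    $table_prefix = 'wp_';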

    Most people rarely need to modify the wp-config.php file. But knowing how it works, and how to edit it safely, gives you more control over your WordPress site.

    If you have read this far, you are probably ready to make changes. We will walk you through the safest way to edit this file without causing any problems.

    Back up the file before you start editing it!

  • Understanding the WordPress directory structure

    Understanding the WordPress directory structure

    Would you like to learn about the WordPress file and directory structure?

    All WordPress core files, themes, plugins, and user uploads are stored on your website's hosting server.

    In this beginner's guide, we will walk you through the WordPress file and directory structure.

    WordPress file and directory structure explained for beginners

    Why you should learn about the WordPress directory structure

    Most users can run a website perfectly well without ever learning about WordPress's file and directory structure. Still, this knowledge is like holding a master key: it lets you solve many common WordPress problems on your own.

    This guide will help you:

    • Recognize the main WordPress files and folders
    • Learn where and how WordPress stores your images and media files
    • Find out where WordPress keeps its configuration files

    This information also helps you understand how WordPress works behind the scenes and which WordPress files you should back up.

    With that said, let's take a look at the WordPress file and directory structure.

    Accessing WordPress files and directories

    Your WordPress files and directories are stored on your web server. You can access them with an FTP client; see our guide on how to upload WordPress files via FTP for detailed instructions.

    A simpler alternative to FTP is the file manager app built into most WordPress hosting control panels.

    File manager app in hosting control panel

    Once you connect to your WordPress site via FTP or the file manager, you will see a file and directory structure like this:

    WordPress files and folders

    In the root folder, you will see WordPress's core files and folders. These are the files and folders that run your WordPress website.

    Apart from the .htaccess and wp-config.php files, you should not edit any of the other files yourself.

    Here is a list of the core WordPress files and folders you will see in your WordPress site's root directory.

    • wp-admin [folder]
    • wp-content [folder]
    • wp-includes [folder]
    • index.php
    • license.txt
    • readme.html
    • wp-activate.php
    • wp-blog-header.php
    • wp-comments-post.php
    • wp-config-sample.php
    • wp-cron.php
    • wp-links-opml.php
    • wp-load.php
    • wp-login.php
    • wp-mail.php
    • wp-settings.php
    • wp-signup.php
    • wp-trackback.php
    • xmlrpc.php

    The list above does not include the .htaccess and wp-config.php files, because those two files only exist after WordPress has been installed.

    WordPress configuration files

    Your WordPress root directory contains some special configuration files. These files hold settings specific to your WordPress website.

    WordPress configuration files
    • .htaccess – a server configuration file that WordPress uses to manage permalinks and redirects.
    • wp-config.php – this file tells WordPress how to connect to your database. It also sets some global options for your WordPress site.
    • index.php – the entry point of WordPress; it bootstraps and loads all WordPress files when a user requests a page.

    Sometimes you may need to edit the wp-config.php or .htaccess file. Be very careful when doing so: a small mistake can make your website inaccessible. Always save a backup copy of these files on your computer before making any changes.

    If you do not see an .htaccess file in your root directory, check our guide on why you can't find the .htaccess file in your WordPress root directory.

    Depending on how your WordPress site is set up, your root directory may or may not also contain the following files.

    • robots.txt – contains instructions for search engine crawlers
    • favicon.ico – a favicon file sometimes generated by WordPress hosts.

    Inside the wp-content directory

    WordPress stores all uploads, plugins, and themes in the wp-content folder.

    WordPress content folder

    It is commonly assumed that you can freely edit the files and folders inside wp-content. However, that is not entirely true.

    Let's look inside the wp-content folder to understand how it works and what you can do there.

    Inside wp-content folder

    The contents of the wp-content folder vary from one WordPress site to another, but all WordPress sites usually contain the following:

    • [folder] themes
    • [folder] plugins
    • [folder] uploads
    • index.php

    WordPress stores your theme files in the /wp-content/themes/ folder. You can edit theme files, but doing so is generally not recommended: as soon as you update the theme to a newer version, your changes will be overwritten during the update.

    That is why it is recommended to create a child theme for any WordPress theme customization.

    All the WordPress plugins you download and install on your site are stored in the /wp-content/plugins/ folder. You should not edit plugin files directly, unless you wrote a site-specific WordPress plugin yourself.

    In many WordPress tutorials, you will see code snippets that can be added to your WordPress site. You can add custom code either to your child theme's functions.php file or by creating a site-specific plugin.

    However, the easiest and safest way to add custom code is to use a code snippets plugin such as WPCode. For step-by-step instructions, see this guide on how to easily add custom code in WordPress.
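    For illustration only, a site-specific plugin is just a single PHP file placed in /wp-content/plugins/ with a plugin header; the plugin name and snippet below are hypothetical examples rather than code from this article:

    <?php
    /**
     * Plugin Name: My Site Customizations
     * Description: Site-specific code snippets that survive theme updates.
     */

    // Example snippet: change the default excerpt length (value chosen arbitrarily).
    add_filter( 'excerpt_length', function ( $length ) {
        return 30;
    } );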

    WordPress stores all the images and media files you upload in the /wp-content/uploads/ folder. By default, uploads are organized into /year/month/ subfolders. Whenever you create a WordPress backup, you should include the uploads folder.

    You can always download fresh copies of WordPress core, your themes, and your installed plugins from their original sources. But if you lose the uploads folder, it will be very hard to restore without a backup.

    You may also see some other default folders in the wp-content directory.

    • languages – WordPress stores language files for non-English WordPress sites in this folder.
    • upgrade – a temporary folder created while WordPress upgrades itself to a newer version.

    Many WordPress plugins may also create their own folders inside wp-content to store files.

    Some WordPress plugins may create folders inside /wp-content/uploads/ to store user uploads. For example, this demo site contains folders created by the Smash Balloon, WooCommerce, SeedProd, and WPForms plugins.

    Plugins may create their own folders inside uploads directory

    Some of these folders may contain important files, so we recommend backing them up just in case.

    Other folders may contain files that you can safely delete. For example, a caching plugin such as WP Rocket may create folders to store cached data.

    That's it: we hope this article helped you understand the WordPress file and directory structure. You may also want to check out our beginner's guide to managing a WordPress database with phpMyAdmin, and our tutorial on how to create a custom WordPress theme without any coding knowledge.

  • An introduction to common open source licenses

    An introduction to common open source licenses

    Besides the well-known GPL, the open source world has many other licenses, such as the LGPL and BSD licenses. They are introduced one by one below.

    LGPL

    LGPL is short for the LESSER GENERAL PUBLIC LICENSE, also known as the LIBRARY GENERAL PUBLIC LICENSE, usually rendered as the "Lesser General Public License" or "Library General Public License". It applies to certain specially designed software packages, typically libraries, from the Free Software Foundation and from other authors who decide to use it.

    The LGPL is another of the GNU open source licenses from the free software movement. Most GNU software, including some libraries, is covered by the ordinary GPL. The LGPL applies to specially designated libraries and differs considerably from the ordinary GPL: it grants the licensee noticeably more permissive rights, which is why it is called the "Lesser" General Public License. It is applied to particular libraries in order to permit non-free programs to link against them.

    When a program is linked with a library, whether statically or through a shared library, the combination can reasonably be regarded as a combined work, a derivative of the original library. The ordinary GPL therefore permits such linking only if the entire combination meets its criteria for freedom. The Lesser GPL permits other code to be linked with the library under looser criteria. In rare cases there may be a special need to encourage the widest possible use of a certain library so that it becomes a de facto standard; to achieve this, non-free programs must be allowed to use it. A more common case is that a free library does the same job as widely used non-free libraries, in which case there is little to gain by limiting the free library to free software only, so the LGPL is used.

    In other cases, permitting non-free programs to use a particular library lets a greater number of people use a large body of free software. For example, permitting non-free programs to use the GNU C Library enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.

    Although the LGPL protects the users' freedom less strictly, it does ensure that the users of a program linked with the library have the freedom, and the practical means, to run that program with a modified version of the library.

    MPL

    MPL is short for The Mozilla Public License, a software license designed in early 1998 by Netscape's Mozilla team for their open source project. The most important reason the MPL was created is that Netscape felt the GPL did not strike a good balance between developers' need for source code and the benefits they gain from it. Compared with the well-known GPL and BSD licenses, the MPL shares many of the same rights and obligations (all of them are OSI-approved open source licenses). However, the MPL has several notable differences:

    ◆ The MPL does require that modifications to code released under the MPL also be licensed out under the MPL, so that others can share the source code under MPL terms. However, the MPL defines "publication" at the level of "files released in source code form". This means the MPL allows a company to add an interface to its existing proprietary code base: apart from the interface code, which must be licensed under the MPL, the rest of the code base does not have to be released under the MPL. This leaves an opening for building commercial software on top of other people's source code.

    ◆ Section 3.7 of the MPL allows the licensee to combine code obtained under the MPL with other kinds of code of their own to produce their own software program.

    ◆ On software patents: the MPL does not explicitly oppose software patents the way the GPL does, but it explicitly requires that code contributors must not contribute code already covered by a patent (unless they hold the patent themselves and grant the public a free written license to that code), and must not apply for patents related to the code after releasing it under an open source license.

    ◆ The definition of source code

    In the MPL (version 1.1), source code is defined as: "the preferred form of the work for making modifications to it, including all modules it contains, plus any associated interface definition files, plus the scripts used to control the compilation and installation of an executable work, or source code that does not differ significantly from the initial source code and that is available from the public domain, as chosen by the contributor."

    ◆ Section 3 of the MPL contains a clause specifically about documenting modifications: every redistributor must include a dedicated file describing when and how the source code was modified.

    BSD

    The BSD license gives users a great deal of freedom. Users can essentially do whatever they like with the code: use it freely, modify the source, and redistribute the modified code as open source or as proprietary software.

    The precondition for that freedom is that when you redistribute code covered by the BSD license, or build your own product on top of BSD-licensed code, you must meet three conditions:

    ◆ If the redistributed product includes source code, the source code must carry the BSD license from the original code.

    ◆ If you redistribute only a binary library or application, the documentation and copyright notices of that library or application must include the BSD license from the original code.

    ◆ You may not use the names of the original authors or organizations, or the name of the original product, for marketing purposes.

    BSD encourages code sharing while requiring respect for the original authors' copyright. Because BSD allows users to modify and redistribute the code, and to use it to build, release, and sell commercial software, it is a very friendly license for commercial integration. Many companies prefer BSD-licensed code when choosing open source products, because it gives them full control over the third-party code and the ability to modify it or build on it when necessary.

    GPL

    The Linux we are all familiar with uses the GPL. The GPL is very different from licenses that encourage code reuse, such as BSD and the Apache License. The GPL's starting point is that the code itself is open source and free to use, and that any code that references, modifies, or derives from it must also be open source and free; it does not allow modified or derived code to be released and sold as closed-source commercial software. This is why we can use so many Linux distributions for free, including those from commercial companies, along with all kinds of free software for Linux developed by individuals, organizations, and commercial software companies.

    The core of the GPL is that as soon as a GPL-licensed product is "used" in a piece of software (where "use" means linking to it as a library, or using modified or derived code), that software must also be released under the GPL, i.e. it must also be open source and free. This is the so-called "viral" nature of the GPL. Using a GPL-licensed product as a standalone product is not a problem, and you still enjoy the benefit of it being free.

    Because the GPL strictly requires that any software product using a GPL library must itself adopt the GPL, GPL-licensed open source code is not suitable as a library, or as a basis for further development, for commercial software vendors or for teams that need to keep their code confidential.

    Other details, such as the requirement to include the GPL text when redistributing, are similar to BSD/Apache.

    MIT

    MIT is as permissive a license as BSD: the author only wants to retain copyright, with no other restrictions. In other words, you must include the original license notice in your distribution, whether you distribute in binary form or in source form. The MIT license, also known as the Massachusetts Institute of Technology license, was originally developed at MIT. Rights of the licensee: 1. the licensee may use, copy, modify, merge, publish, distribute, sublicense, and sell the software and copies of the software; 2. the licensee may adapt the license terms as appropriate to the needs of the program. Obligation of the licensee: the copyright notice and the permission notice must be included in the software and in all copies of the software.

    AL2.0

    The Apache License is the license used by Apache, the well-known non-profit open source organization. It is similar to the BSD license: it likewise encourages code sharing and respect for the original author's copyright, and likewise allows the code to be modified and redistributed (as open source or as commercial software). The conditions that must be met are also similar to BSD:

    ◆ You must give users of the code a copy of the Apache License.

    ◆ If you modified the code, you must note this in the modified files.

    ◆ Derived code (modifications and code derived from the source) must retain the license, trademark, and patent notices from the original code, along with any other notices the original author required to be included.

    ◆ If the redistributed product includes a NOTICE file, the NOTICE file must include the Apache License. You may add your own notices to the NOTICE file, but they may not be presented as modifying the Apache License.

    The Apache License is also friendly to commercial use. Users can modify the code as needed and release or sell the result as an open source or commercial product.

  • How to upload SVG files to WordPress

    How to upload SVG files to WordPress

    As is well known, WordPress does not support SVG uploads out of the box; if you try to upload an SVG file, you will see an error like the one below.

    How do we solve this? There are two approaches: install a plugin such as SVG Support, or use the upload_mimes hook that WordPress provides.

    • Open your hosting panel or another file editor and go to the WordPress root directory -> file manager.
    • Edit the wp-includes/functions.php file
    function add_file_types_to_uploads($file_types){
      $new_filetypes = array();
      $new_filetypes['svg'] = 'image/svg+xml';
      $file_types = array_merge($file_types, $new_filetypes );
      return $file_types;
    }
    add_filter('upload_mimes', 'add_file_types_to_uploads');

    Paste the code above at the end of the file, refresh the admin dashboard, and you will be able to upload SVG files. (Note that edits to a core file like this are overwritten when WordPress updates, so a theme's functions.php file or a small plugin is a safer place for the snippet.)

  • Control-flow Integrity in V8

    Control-flow Integrity in V8

    Published 09 October 2023 · Tagged with security

    Control-flow integrity (CFI) is a security feature aiming to prevent exploits from hijacking control-flow. The idea is that even if an attacker manages to corrupt the memory of a process, additional integrity checks can prevent them from executing arbitrary code. In this blog post, we want to discuss our work to enable CFI in V8.

    Background

    The popularity of Chrome makes it a valuable target for 0-day attacks and most in-the-wild exploits we’ve seen target V8 to gain initial code execution. V8 exploits typically follow a similar pattern: an initial bug leads to memory corruption but often the initial corruption is limited and the attacker has to find a way to arbitrarily read/write in the whole address space. This allows them to hijack the control-flow and run shellcode that executes the next step of the exploit chain that will try to break out of the Chrome sandbox.

    To prevent the attacker from turning memory corruption into shellcode execution, we’re implementing control-flow integrity in V8. This is especially challenging in the presence of a JIT compiler. If you turn data into machine code at runtime, you now need to ensure that corrupted data can’t turn into malicious code. Fortunately, modern hardware features provide us with the building blocks to design a JIT compiler that is robust even while processing corrupted memory.

    In the following, we’ll look at the problem divided into three separate parts:

    • Forward-Edge CFI verifies the integrity of indirect control-flow transfers such as function pointer or vtable calls.
    • Backward-Edge CFI needs to ensure that return addresses read from the stack are valid.
    • JIT Memory Integrity validates all data that is written to executable memory at runtime.

    Forward-Edge CFI

    There are two hardware features that we want to use to protect indirect calls and jumps: landing pads and pointer authentication.

    Landing Pads

    Landing pads are special instructions that can be used to mark valid branch targets. If enabled, indirect branches can only jump to a landing pad instruction, anything else will raise an exception.
    On ARM64 for example, landing pads are available with the Branch Target Identification (BTI) feature introduced in Armv8.5-A. BTI support is already enabled in V8.
    On x64, landing pads were introduced with the Indirect Branch Tracking (IBT) part of the Control Flow Enforcement Technology (CET) feature.

    However, adding landing pads on all potential targets for indirect branches only provides us with coarse-grained control-flow integrity and still gives attackers lots of freedom. We can further tighten the restrictions by adding function signature checks (the argument and return types at the call site must match the called function) as well as through dynamically removing unneeded landing pad instructions at runtime.
    These features are part of the recent FineIBT proposal and we hope that it can get OS adoption.

    Pointer Authentication

    Armv8.3-A introduced pointer authentication (PAC) which can be used to embed a signature in the upper unused bits of a pointer. Since the signature is verified before the pointer is used, attackers won’t be able to provide arbitrary forged pointers to indirect branches.

    Backward-Edge CFI

    To protect return addresses, we also want to make use of two separate hardware features: shadow stacks and PAC.

    Shadow Stacks

    With Intel CET’s shadow stacks and the guarded control stack (GCS) in Armv9.4-A, we can have a separate stack just for return addresses that has hardware protections against malicious writes. These features provide some pretty strong protections against return address overwrites, but we will need to deal with cases where we legitimately modify the return stack such as during optimization / deoptimization and exception handling.

    Pointer Authentication (PAC-RET)

    Similar to indirect branches, pointer authentication can be used to sign return addresses before they get pushed to the stack. This is already enabled in V8 on ARM64 CPUs.

    A side effect of using hardware support for Forward-edge and Backward-edge CFI is that it will allow us to keep the performance impact to a minimum.

    JIT Memory Integrity

    A unique challenge to CFI in JIT compilers is that we need to write machine code to executable memory at runtime. We need to protect the memory in a way that the JIT compiler is allowed to write to it but the attacker’s memory write primitive can’t. A naive approach would be to change the page permissions temporarily to add / remove write access. But this is inherently racy since we need to assume that the attacker can trigger an arbitrary write concurrently from a second thread.

    Per-thread Memory Permissions

    On modern CPUs, we can have different views of the memory permissions that only apply to the current thread and can be changed quickly in userland.
    On x64 CPUs, this can be achieved with memory protection keys (pkeys) and ARM announced the permission overlay extensions in Armv8.9-A.
    This lets us toggle write access to executable memory at a fine granularity, for example by tagging it with a separate pkey.
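    As a rough illustration of the mechanism (not V8's actual implementation), the Linux pkeys API lets a thread tag a mapping with a key and then toggle its own write access through that key; the sketch below assumes an x64 CPU and kernel with protection-keys support:

    // Sketch: per-thread write protection for a JIT region using Linux pkeys.
    #include <sys/mman.h>
    #include <cstring>

    int main() {
      const size_t kSize = 4096;
      // Region that will hold generated code.
      void* code = mmap(nullptr, kSize, PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      // Tag the region with a dedicated protection key.
      int pkey = pkey_alloc(/*flags=*/0, /*access_rights=*/0);
      pkey_mprotect(code, kSize, PROT_READ | PROT_WRITE | PROT_EXEC, pkey);
      // Default for this thread: reads allowed, writes denied.
      pkey_set(pkey, PKEY_DISABLE_WRITE);
      // Short critical section: enable writes for this thread only, emit code,
      // then drop write access again.
      pkey_set(pkey, 0);
      const unsigned char kRet[] = {0xc3};  // x64 `ret`
      memcpy(code, kRet, sizeof(kRet));
      pkey_set(pkey, PKEY_DISABLE_WRITE);
      return 0;
    }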

    The JIT pages are now not attacker writable anymore but the JIT compiler still needs to write generated code into it. In V8, the generated code lives in AssemblerBuffers on the heap which can be corrupted by the attacker instead. We could protect the AssemblerBuffers too in the same fashion, but this just shifts the problem. For example, we’d then also need to protect the memory where the pointer to the AssemblerBuffer lives.
    In fact, any code that enables write access to such protected memory constitutes CFI attack surface and needs to be coded very defensively. E.g. any write to a pointer that comes from unprotected memory is game over, since the attacker can use it to corrupt executable memory. Thus, our design goal is to have as few of these critical sections as possible and keep the code inside short and self-contained.

    Control-Flow Validation

    If we don’t want to protect all compiler data, we can assume it to be untrusted from the point of view of CFI instead. Before writing anything to executable memory, we need to validate that it doesn’t lead to arbitrary control-flow. That includes for example that the written code doesn’t perform any syscall instructions or that it doesn’t jump into arbitrary code. Of course, we also need to check that it doesn’t change the pkey permissions of the current thread. Note that we don’t try to prevent the code from corrupting arbitrary memory since if the code is corrupted we can assume the attacker already has this capability.
    To perform such validation safely, we will also need to keep required metadata in protected memory as well as protect local variables on the stack.
    We ran some preliminary tests to assess the impact of such validation on performance. Fortunately, the validation is not occurring in performance-critical code paths, and we did not observe any regressions in the JetStream or Speedometer benchmarks.

    Evaluation

    Offensive security research is an essential part of any mitigation design and we’re continuously trying to find new ways to bypass our protections. Here are some examples of attacks that we think will be possible and ideas to address them.

    Corrupted Syscall Arguments

    As mentioned before, we assume that an attacker can trigger a memory write primitive concurrently to other running threads. If another thread performs a syscall, some of the arguments could then be attacker-controlled if they're read from memory. Chrome runs with a restrictive syscall filter, but there are still a few syscalls that could be used to bypass the CFI protections.

    Sigaction for example is a syscall to register signal handlers. During our research we found that a sigaction call in Chrome is reachable in a CFI-compliant way. Since the arguments are passed in memory, an attacker could trigger this code path and point the signal handler function to arbitrary code. Luckily, we can address this easily: either block the path to the sigaction call or block it with a syscall filter after initialization.

    Other interesting examples are the memory management syscalls. For example, if a thread calls munmap on a corrupted pointer, the attacker could unmap read-only pages and a consecutive mmap call can reuse this address, effectively adding write permissions to the page.
    Some OSes already provide protections against this attack with memory sealing: Apple platforms provide the VM_FLAGS_PERMANENT flag and OpenBSD has an mimmutable syscall.

    Signal Frame Corruption

    When the kernel executes a signal handler, it will save the current CPU state on the userland stack. A second thread could corrupt the saved state which will then get restored by the kernel.
    Protecting against this in user space seems difficult if the signal frame data is untrusted. At that point one would have to always exit or overwrite the signal frame with a known save state to return to.
    A more promising approach would be to protect the signal stack using per-thread memory permissions. For example, a pkey-tagged sigaltstack would protect against malicious overwrites, but it would require the kernel to temporarily allow write permissions when saving the CPU state onto it.

    v8CTF

    These were just a few examples of potential attacks that we’re working on addressing and we also want to learn more from the security community. If this interests you, try your hand at the recently launched v8CTF! Exploit V8 and gain a bounty, exploits targeting n-day vulnerabilities are explicitly in scope!

  • The V8 Sandbox

    The V8 Sandbox

    After almost three years since the initial design document and hundreds of CLs in the meantime, the V8 Sandbox — a lightweight, in-process sandbox for V8 — has now progressed to the point where it is no longer considered an experimental security feature. Starting today, the V8 Sandbox is included in Chrome’s Vulnerability Reward Program (VRP). While there are still a number of issues to resolve before it becomes a strong security boundary, the VRP inclusion is an important step in that direction. Chrome 123 could therefore be considered to be a sort of “beta” release for the sandbox. This blog post uses this opportunity to discuss the motivation behind the sandbox, show how it prevents memory corruption in V8 from spreading within the host process, and ultimately explain why it is a necessary step towards memory safety.

    Motivation

    Memory safety remains a relevant problem: all Chrome exploits caught in the wild in the last three years (2021 – 2023) started out with a memory corruption vulnerability in a Chrome renderer process that was exploited for remote code execution (RCE). Of these, 60% were vulnerabilities in V8. However, there is a catch: V8 vulnerabilities are rarely “classic” memory corruption bugs (use-after-frees, out-of-bounds accesses, etc.) but instead subtle logic issues which can in turn be exploited to corrupt memory. As such, existing memory safety solutions are, for the most part, not applicable to V8. In particular, neither switching to a memory safe language, such as Rust, nor using current or future hardware memory safety features, such as memory tagging, can help with the security challenges faced by V8 today.

    To understand why, consider a highly simplified, hypothetical JavaScript engine vulnerability: the implementation of JSArray::fizzbuzz(), which replaces values in the array that are divisible by 3 with “fizz”, divisible by 5 with “buzz”, and divisible by both 3 and 5 with “fizzbuzz”. Below is an implementation of that function in C++. JSArray::buffer_ can be thought of as a JSValue*, that is, a pointer to an array of JavaScript values, and JSArray::length_ contains the current size of that buffer.

    for (int index = 0; index < length_; index++) {
      JSValue js_value = buffer_[index];
      int value = ToNumber(js_value).int_value();
      if (value % 15 == 0)
        buffer_[index] = JSString("fizzbuzz");
      else if (value % 5 == 0)
        buffer_[index] = JSString("buzz");
      else if (value % 3 == 0)
        buffer_[index] = JSString("fizz");
    }

    Seems simple enough? However, there’s a somewhat subtle bug here: the ToNumber conversion in line 3 can have side effects as it may invoke user-defined JavaScript callbacks. Such a callback could then shrink the array, thereby causing an out-of-bounds write afterwards. The following JavaScript code would likely cause memory corruption:

    let array = new Array(100);
    let evil = { [Symbol.toPrimitive]() { array.length = 1; return 15; } };
    array.push(evil);
    // At index 100, the @@toPrimitive callback of |evil| is invoked in
    // line 3 above, shrinking the array to length 1 and reallocating its
    // backing buffer. The subsequent write (line 5) goes out-of-bounds.
    array.fizzbuzz();

    Note that this vulnerability could occur both in hand-written runtime code (as in the example above) or in machine code generated at runtime by an optimizing just-in-time (JIT) compiler (if the function was implemented in JavaScript instead). In the former case, the programmer would conclude that an explicit bounds-check for the store operations is not necessary as that index has just been accessed. In the latter case, it would be the compiler drawing the same incorrect conclusion during one of its optimization passes (for example redundancy elimination or bounds-check elimination) because it doesn’t model the side effects of ToNumber() correctly.

    While this is an artificially simple bug (this specific bug pattern has become mostly extinct by now due to improvements in fuzzers, developer awareness, and researcher attention), it is still useful to understand why vulnerabilities in modern JavaScript engines are difficult to mitigate in a generic way. Consider the approach of using a memory safe language such as Rust, where it is the compiler’s responsibility to guarantee memory safety. In the above example, a memory safe language would likely prevent this bug in the hand-written runtime code used by the interpreter. However, it would not prevent the bug in any just-in-time compiler as the bug there would be a logic issue, not a “classic” memory corruption vulnerability. Only the code generated by the compiler would actually cause any memory corruption. Fundamentally, the issue is that memory safety cannot be guaranteed by the compiler if a compiler is directly part of the attack surface.

    Similarly, disabling the JIT compilers would also only be a partial solution: historically, roughly half of the bugs discovered and exploited in V8 affected one of its compilers while the rest were in other components such as runtime functions, the interpreter, the garbage collector, or the parser. Using a memory-safe language for these components and removing JIT compilers could work, but would significantly reduce the engine’s performance (ranging, depending on the type of workload, from 1.5–10× or more for computationally intensive tasks).

    Now consider instead popular hardware security mechanisms, in particular memory tagging. There are a number of reasons why memory tagging would similarly not be an effective solution. For example, CPU side channels, which can easily be exploited from JavaScript, could be abused to leak tag values, thereby allowing an attacker to bypass the mitigation. Furthermore, due to pointer compression, there is currently no space for the tag bits in V8’s pointers. As such, the entire heap region would have to be tagged with the same tag, making it impossible to detect inter-object corruption. As such, while memory tagging can be very effective on certain attack surfaces, it is unlikely to represent much of a hurdle for attackers in the case of JavaScript engines.

    In summary, modern JavaScript engines tend to contain complex, 2nd-order logic bugs which provide powerful exploitation primitives. These cannot be effectively protected by the same techniques used for typical memory-corruption vulnerabilities. However, nearly all vulnerabilities found and exploited in V8 today have one thing in common: the eventual memory corruption necessarily happens inside the V8 heap because the compiler and runtime (almost) exclusively operate on V8 HeapObject instances. This is where the sandbox comes into play.

    The V8 (Heap) Sandbox

    The basic idea behind the sandbox is to isolate V8’s (heap) memory such that any memory corruption there cannot “spread” to other parts of the process’ memory.

    As a motivating example for the sandbox design, consider the separation of user- and kernel space in modern operating systems. Historically, all applications and the operating system’s kernel would share the same (physical) memory address space. As such, any memory error in a user application could bring down the whole system by, for example, corrupting kernel memory. On the other hand, in a modern operating system, each userland application has its own dedicated (virtual) address space. As such, any memory error is limited to the application itself, and the rest of the system is protected. In other words, a faulty application can crash itself but not affect the rest of the system. Similarly, the V8 Sandbox attempts to isolate the untrusted JavaScript/WebAssembly code executed by V8 such that a bug in V8 does not affect the rest of the hosting process.

    In principle, the sandbox could be implemented with hardware support: similar to the userland-kernel split, V8 would execute some mode-switching instruction when entering or leaving sandboxed code, which would cause the CPU to be unable to access out-of-sandbox memory. In practice, no suitable hardware feature is available today, and the current sandbox is therefore implemented purely in software.

    The basic idea behind the software-based sandbox is to replace all data types that can access out-of-sandbox memory with “sandbox-compatible” alternatives. In particular, all pointers (both to objects on the V8 heap or elsewhere in memory) and 64-bit sizes must be removed as an attacker could corrupt them to subsequently access other memory in the process. This implies that memory regions such as the stack cannot be inside the sandbox as they must contain pointers (for example return addresses) due to hardware and OS constraints. As such, with the software-based sandbox, only the V8 heap is inside the sandbox, and the overall construction is therefore not unlike the sandboxing model used by WebAssembly.

    To understand how this works in practice, it is useful to look at the steps an exploit has to perform after corrupting memory. The goal of an RCE exploit would typically be to perform a privilege escalation attack, for example by executing shellcode or performing a return-oriented programming (ROP)-style attack. For either of these, the exploit will first want the ability to read and write arbitrary memory in the process, for example to then corrupt a function pointer or place a ROP-payload somewhere in memory and pivot to it. Given a bug that corrupts memory on the V8 heap, an attacker would therefore look for an object such as the following:

    class JSArrayBuffer: public JSObject {
      private:
        byte* buffer_;
        size_t size_;
    };

    Given this, the attacker would then either corrupt the buffer pointer or the size value to construct an arbitrary read/write primitive. This is the step that the sandbox aims to prevent. In particular, with the sandbox enabled, and assuming that the referenced buffer is located inside the sandbox, the above object would now become:

    class JSArrayBuffer: public JSObject {
      private:
        sandbox_ptr_t buffer_;
        sandbox_size_t size_;
    };

    Where sandbox_ptr_t is a 40-bit offset (in the case of a 1TB sandbox) from the base of the sandbox. Similarly, sandbox_size_t is a “sandbox-compatible” size, currently limited to 32GB.
    Alternatively, if the referenced buffer was located outside of the sandbox, the object would instead become:

    class JSArrayBuffer: public JSObject {
      private:
        external_ptr_t buffer_;
    };

    Here, an external_ptr_t references the buffer (and its size) through a pointer table indirection (not unlike the file descriptor table of a unix kernel or a WebAssembly.Table) which provides memory safety guarantees.

    In both cases, an attacker would find themselves unable to “reach out” of the sandbox into other parts of the address space. Instead, they would first need an additional vulnerability: a V8 Sandbox bypass. The following image summarizes the high-level design, and the interested reader can find more technical details about the sandbox in the design documents linked from src/sandbox/README.md.

    A high-level diagram of the sandbox design
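    To make the offset scheme more concrete, the sketch below (our illustration, not V8's actual code) shows how a sandboxed "pointer" stored as a 40-bit offset could be turned back into a full address; because the offset is masked, even a fully attacker-controlled value can only address memory inside the 1TB sandbox reservation:

    #include <cstdint>

    // Hypothetical decode of an in-sandbox reference: a 40-bit offset from the
    // sandbox base. Masking keeps the result inside the reservation.
    constexpr uint64_t kSandboxSizeLog2 = 40;  // 1TB sandbox
    constexpr uint64_t kOffsetMask = (uint64_t{1} << kSandboxSizeLog2) - 1;

    inline uintptr_t DecodeSandboxedPointer(uintptr_t sandbox_base,
                                            uint64_t raw_value) {
      return sandbox_base + (raw_value & kOffsetMask);
    }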

    Solely converting pointers and sizes to a different representation is not quite sufficient in an application as complex as V8 and there are a number of other issues that need to be fixed. For example, with the introduction of the sandbox, code such as the following suddenly becomes problematic:

    std::vector<std::string> JSObject::GetPropertyNames() {
        int num_properties = TotalNumberOfProperties();
        std::vector<std::string> properties(num_properties);
    
        for (int i = 0; i < NumberOfInObjectProperties(); i++) {
            properties[i] = GetNameOfInObjectProperty(i);
        }
    
        // Deal with the other types of properties
        // ...

    This code makes the (reasonable) assumption that the number of properties stored directly in a JSObject must be less than the total number of properties of that object. However, assuming these numbers are simply stored as integers somewhere in the JSObject, an attacker could corrupt one of them to break this invariant. Subsequently, the access into the (out-of-sandbox) std::vector would go out of bounds. Adding an explicit bounds check, for example with an SBXCHECK, would fix this.
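    A check along the following lines (a sketch based on the example above, not V8's real code) would re-establish the invariant before the out-of-sandbox std::vector is indexed:

    int num_properties = TotalNumberOfProperties();
    std::vector<std::string> properties(num_properties);

    // Both counts live inside the sandbox and are therefore untrusted;
    // validate them before using them as bounds.
    int in_object = NumberOfInObjectProperties();
    SBXCHECK(in_object >= 0 && in_object <= num_properties);

    for (int i = 0; i < in_object; i++) {
      properties[i] = GetNameOfInObjectProperty(i);
    }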

    Encouragingly, nearly all “sandbox violations” discovered so far are like this: trivial (1st order) memory corruption bugs such as use-after-frees or out-of-bounds accesses due to lack of a bounds check. Contrary to the 2nd order vulnerabilities typically found in V8, these sandbox bugs could actually be prevented or mitigated by the approaches discussed earlier. In fact, the particular bug above would already be mitigated today due to Chrome’s libc++ hardening. As such, the hope is that in the long run, the sandbox becomes a more defensible security boundary than V8 itself. While the currently available data set of sandbox bugs is very limited, the VRP integration launching today will hopefully help produce a clearer picture of the type of vulnerabilities encountered on the sandbox attack surface.

    Performance

    One major advantage of this approach is that it is fundamentally cheap: the overhead caused by the sandbox comes mostly from the pointer table indirection for external objects (costing roughly one additional memory load) and to a lesser extent from the use of offsets instead of raw pointers (costing mostly just a shift+add operation, which is very cheap). The current overhead of the sandbox is therefore only around 1% or less on typical workloads (measured using the Speedometer and JetStream benchmark suites). This allows the V8 Sandbox to be enabled by default on compatible platforms.

    Testing

    A desirable feature for any security boundary is testability: the ability to manually and automatically test that the promised security guarantees actually hold in practice. This requires a clear attacker model, a way to “emulate” an attacker, and ideally a way of automatically determining when the security boundary has failed. The V8 Sandbox fulfills all of these requirements:

    1. A clear attacker model: it is assumed that an attacker can read and write arbitrarily inside the V8 Sandbox. The goal is to prevent memory corruption outside of the sandbox.
    2. A way to emulate an attacker: V8 provides a “memory corruption API” when built with the v8_enable_memory_corruption_api = true flag. This emulates the primitives obtained from typical V8 vulnerabilities and in particular provides full read- and write access inside the sandbox.
    3. A way to detect “sandbox violations”: V8 provides a “sandbox testing” mode (enabled via either --sandbox-testing or --sandbox-fuzzing) which installs a signal handler that determines if a signal such as SIGSEGV represents a violation of the sandbox’s security guarantees.

    Ultimately, this allows the sandbox to be integrated into Chrome’s VRP program and be fuzzed by specialized fuzzers.
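    As a rough illustration (exact output directories and arguments depend on your checkout), a local build and test run using the flags mentioned above might look like this:

    # Build d8 with the sandbox and the memory corruption API enabled.
    gn gen out/x64.release --args='v8_enable_sandbox=true v8_enable_memory_corruption_api=true'
    ninja -C out/x64.release d8
    # Run a test case in sandbox-testing mode so that violations are detected.
    ./out/x64.release/d8 --sandbox-testing test.js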

    Usage

    The V8 Sandbox must be enabled/disabled at build time using the v8_enable_sandbox build flag. It is (for technical reasons) not possible to enable/disable the sandbox at runtime. The V8 Sandbox requires a 64-bit system as it needs to reserve a large amount of virtual address space, currently one terabyte.

    The V8 Sandbox has already been enabled by default on 64-bit (specifically x64 and arm64) versions of Chrome on Android, ChromeOS, Linux, macOS, and Windows for roughly the last two years. Even though the sandbox was (and still is) not feature complete, this was mainly done to ensure that it does not cause stability issues and to collect real-world performance statistics. Consequently, recent V8 exploits already had to work their way past the sandbox, providing helpful early feedback on its security properties.

    Conclusion

    The V8 Sandbox is a new security mechanism designed to prevent memory corruption in V8 from impacting other memory in the process. The sandbox is motivated by the fact that current memory safety technologies are largely inapplicable to optimizing JavaScript engines. While these technologies fail to prevent memory corruption in V8 itself, they can in fact protect the V8 Sandbox attack surface. The sandbox is therefore a necessary step towards memory safety.

  • WordPress 6.8 released, with site performance improvements

    WordPress 6.8 released, with site performance improvements

    WordPress 6.8 refines and polishes the tools you use every day, making your site faster, more secure, and easier to manage. The Style Book now has a structured layout and works with classic themes, giving you better control over global styles. Speculative loading speeds up navigation by preloading links before visitors navigate to them, the bcrypt hashing algorithm automatically strengthens password security, and database optimizations improve performance.

    The Style Book is cleaner and has learned some new tricks

    The Style Book has a new structured layout and clearer labels, making it easier to edit colors and typography (and almost all of your site's styles) in one place.

    In addition, you can now see it in classic themes that include editor-styles support or a theme.json file. Find the Style Book under Appearance > Design, and use it to preview how your theme evolves while editing CSS or making changes in the Customizer.

    Editor improvements

    It is easier to see options in Data Views, and sticky posts can now be excluded from the Query Loop. The editor also includes many small refinements that make building everything smoother.

    Nearly instant page loads thanks to speculative loading

    Pages load faster than ever in WordPress 6.8. When you or your visitors hover over or click a link, WordPress may preload the next page, delivering a smoother, nearly instant experience. The system balances speed and efficiency, and you can control how it behaves through a plugin or custom code. The feature only applies to modern browsers; older browsers simply ignore it, with no side effects.

    Stronger password security with bcrypt

    Passwords are now hashed with the bcrypt algorithm, which requires far more computing power to crack. This strengthens overall security, as do other cryptographic improvements in WordPress. You don't have to do anything; everything is updated automatically.
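    Under the hood, bcrypt hashing in PHP looks roughly like the sketch below. WordPress wraps this behind its own wp_hash_password() and wp_check_password() functions, so this is only an illustration of the idea, not code you need to add anywhere:

    <?php
    // Hash a password with bcrypt; the salt and cost factor are embedded in the hash.
    $hash = password_hash( 'example-password', PASSWORD_BCRYPT );
    // Verification re-hashes the supplied password and compares the result.
    $ok = password_verify( 'example-password', $hash );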

    Read more: WordPress 6.8 will use bcrypt for password hashing

    Accessibility improvements

    More than 100 accessibility fixes and enhancements across a wide range of WordPress experiences. This release fixes all bundled themes, improves navigation menu management and customization tools, and streamlines labeling. The block editor receives more than 70 improvements covering blocks, Data Views, and the overall user experience.

    Performance updates

    WordPress 6.8 includes a series of performance fixes and enhancements aimed at speeding up everything from editing to browsing. Beyond speculative loading, WordPress 6.8 pays particular attention to the block editor, block type registration, and query caching. And imagine never waiting more than 50 milliseconds for any interaction: with WordPress 6.8, the Interactivity API takes its first steps toward that goal.

    This article comes from wordpress大学.