Blog

  • Akamai becomes a Linux kernel infrastructure partner

    Akamai becomes a Linux kernel infrastructure partner

    On April 22, 2025, the global cloud services provider Akamai announced that it has become the new infrastructure partner for Linux kernel development. Akamai will support kernel.org through its cloud computing services and content delivery network (CDN).

    Supporting the sustainable growth of the developer community

    Akamai has signed a multi-year agreement with the Linux Kernel Organization under which it will provide infrastructure support for the Linux project and the network of developers behind it. Many Linux developers are volunteers, and their ongoing contributions are critical to keeping the system secure, performant, and available. Today, Linux is widely used by governments, research institutions, non-profits, and enterprises around the world, in areas ranging from smartphones and industrial equipment to cloud data centers and financial trading platforms.

    The Linux kernel is vast and complex, now comprising more than 28 million lines of code. Since 2005, more than 13,500 developers from over 1,300 companies have contributed to it. Developers continually iterate on the kernel code and distribute it to the teams that build the many Linux distributions. The infrastructure Akamai provides will ensure that they can access this source code securely and efficiently.

    Open-source symbiosis: from giving back to a thriving ecosystem

    Alex Chircop, Chief Architect of Akamai Cloud and a member of the CNCF Technical Oversight Committee, said: "Akamai's platform is itself built on Linux and open-source technologies. Supporting kernel.org is our way of giving back to the community, and it continues our commitment to the CNCF. It is, after all, where the 'Lin' in Linode comes from: a tribute to Linux."

    Akamai's open-source commitment does not stop at Linux. CNCF CTO Chris Aniszczyk said: "Akamai is a valued member of the open-source community. From contributions to key projects such as OpenTelemetry, Argo, and Prometheus, to donating US$1 million in infrastructure credits to CNCF projects, they have shown firm support for open-source talent and projects."

    A gold member of the CNCF and a platinum sponsor of KubeCon, Akamai acquired the Linux cloud pioneer Linode in 2022, then acquired Ondat, a Kubernetes-native storage platform, in 2023, further extending its open-source and cloud-native capabilities. Akamai's Linode Kubernetes Engine, a fully managed container orchestration platform designed to simplify deploying and managing cloud-native applications, is certified Kubernetes-conformant by the CNCF.

    As part of its open-source commitment, Akamai also announced that it will provide infrastructure and delivery support for Alpine Linux, one of the most popular Linux distributions.

    By providing infrastructure for the Linux kernel and Alpine Linux, Akamai is helping developers worldwide collaborate more efficiently. Giving back through its actual technical strengths not only improves open-source collaboration, but also demonstrates a sustainable model for commercial companies and open-source communities to grow together. Akamai says it will continue to give back to the open-source community and help the open-source ecosystem flourish.

  • Apple patents an event camera system

    Apple patents an event camera system

    Recent filings show that Apple has been granted a patent for an advanced event camera system that could eventually appear in a range of devices, including the MacBook.

    Conventional cameras struggle to capture dynamic scenes, especially fast motion. Event cameras instead record changes on a per-pixel basis, enabling much faster response to new data. In cluttered scenes, however, the large volume of irrelevant events can hurt recognition efficiency.

    Apple's proposed solution fuses in information from a conventional frame camera to lock onto key regions, such as the user's hands, so that the system processes only the useful data. The frame camera image is used to define a region of interest, for example a bounding box around the hand, filtering out irrelevant background and analyzing only the events that occur inside the target region.

    The event camera can also sense in the infrared band, which helps reduce environmental interference. The system can further track finger or palm movement by analyzing consecutive blocks of events, an approach that is more accurate and efficient than interpreting each event individually.

    The technology is not limited to laptops. According to the filings, it could also be applied to smartwatches, smartphones, tablets, head-mounted displays, and smart home devices.

  • An introduction to common open-source licenses

    An introduction to common open-source licenses

    Besides the familiar GPL, the open-source world has many other licenses, such as the LGPL and BSD licenses. This article introduces the most common ones, starting with the LGPL.

    LGPL

    LGPL is short for the Lesser General Public License, formerly called the Library General Public License. It applies to some specially designed software packages, typically libraries, published by the Free Software Foundation and by other authors who decide to use it.

    The LGPL is another of the GNU open-source licenses from the Free Software Foundation. Most GNU software, including some libraries, is covered by the ordinary GPL. The LGPL, by contrast, applies to specially designated libraries and is quite different from the ordinary GPL: it grants the licensee considerably more permissive terms, hence the name "Lesser". It is used for certain libraries in order to permit non-free programs to be linked with them.

    When a program is linked with a library, whether statically or via a shared library, the combination can reasonably be regarded as a combined work, a derivative of the original library. The ordinary GPL therefore permits such linking only if the entire combination meets its criteria of freedom. The Lesser GPL permits other code to be linked with the library under more relaxed criteria. For example, on rare occasions there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de facto standard; to achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as a widely used non-free library. In this case there is little to gain by limiting the free library to free software only, so the LGPL is used.

    In other cases, permitting non-free programs to use a particular library lets a greater number of people use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.

    Although the LGPL protects the users' freedom less, it does ensure that the user of a program linked with the library has the freedom, and the necessary means, to run that program using a modified version of the library.

    MPL

    MPL is short for The Mozilla Public License, a software license designed in early 1998 by Netscape's Mozilla team for its open-source project. The main reason the MPL was created is that Netscape felt the GPL did not strike a good balance between developers' need for source code and the value they derive from it. Compared with the well-known GPL and BSD licenses, the MPL agrees with them on many of the rights and obligations it stipulates (all are OSI-approved open-source licenses), but it differs in several notable ways:

    ◆ The MPL requires that modifications to code released under the MPL also be licensed out under the MPL, so that others can share the source under MPL terms. However, the MPL defines the unit of "release" as "the files released in source form". This means the MPL allows a company to add an interface on top of its existing code base: only the interface code must be licensed out under the MPL, while the code base behind it need not be. This leaves an opening for building one's own commercial software on top of borrowed source code.

    ◆ Section 3.7 of the MPL allows the licensee to combine code obtained under the MPL with other types of code to produce their own software program.

    ◆ On software patents, the MPL does not explicitly oppose them the way the GPL does, but it explicitly requires that contributors must not contribute source code that is already covered by a patent (unless they hold the patent themselves and grant the public a free written license to that code), and that they must not apply for patents related to the code after licensing it out in open-source form.

    ◆ The definition of source code

    The MPL (version 1.1) defines source code as: "the preferred form of the work for making modifications to it, including all modules it contains, plus any associated interface definition files, scripts used to control compilation and installation of an executable, or source code differential comparisons against either the original code or another well-known, available code of the contributor's choice."

    ◆ Section 3 of the MPL contains a clause dedicated to describing modifications: every redistributor must include a file documenting when and how the source code was modified.

    BSD

    The BSD license gives users a great deal of freedom. Essentially, users can "do whatever they want": freely use and modify the source code, and redistribute the modified code as open-source or proprietary software.

    The catch is that when you redistribute code covered by the BSD license, or build your own product on BSD-licensed code, you must meet three conditions:

    ◆ If the redistributed product includes source code, the source must retain the BSD license from the original code.

    ◆ If you redistribute only a binary library or application, its documentation and copyright notice must include the BSD license from the original code.

    ◆ You may not use the names of the original authors or organizations, or the original product's name, for marketing.

    BSD encourages code sharing while requiring respect for the code authors' copyright. Because it allows users to modify and redistribute the code, and to develop, release, and sell commercial software based on BSD code, it is a very friendly license for commercial integration. Many companies favor the BSD license when choosing open-source products, because it gives them full control over the third-party code, which they can modify or build upon as needed.

    GPL

    Linux, which we all know well, is released under the GPL. The GPL is very different from licenses that encourage code reuse, such as BSD and the Apache License. The GPL's starting point is that the code itself is open source and free to use, and that referenced, modified, and derived code must be open source and free as well; it does not allow modified or derived code to be released and sold as closed-source commercial software. This is why we can use all the free Linux systems, including commercial companies' Linux distributions, along with the wealth of free software developed for Linux by individuals, organizations, and software companies.

    The core of the GPL is that as soon as a piece of software uses ("use" meaning linking as a library, or incorporating modified or derived code) a GPL-licensed product, that software must itself adopt the GPL, i.e. it must also be open source and free. This is the so-called "viral" property. Using a GPL product as a standalone program poses no problem at all, and you still enjoy it for free.

    Because the GPL strictly requires that any software product using a GPL library must itself be GPL-licensed, GPL code is unsuitable as a library or a basis for further development for commercial software, or for organizations with code confidentiality requirements.

    Other details, such as redistributing along with a copy of the GPL, are similar to BSD/Apache.

    MIT

    MIT is as permissive a license as BSD: the author wants only to retain copyright, with no other restrictions. In other words, you must include a copy of the original license notice in your distribution, whether you distribute in binary form or as source code. The MIT License takes its name from the Massachusetts Institute of Technology, where it originated.

    Rights of the licensee: 1. the licensee may use, copy, modify, merge, publish, distribute, sublicense, and sell the software and copies of the software; 2. the licensee may adapt the license terms as appropriate to the needs of the program.

    Obligations of the licensee: the copyright notice and the permission notice must be included in the software and in all copies of the software.

    AL2.0

    The Apache License is the license used by the well-known non-profit open-source organization Apache. It is similar to BSD: it likewise encourages code sharing and respect for the original authors' copyright, and likewise permits code modification and redistribution (as open-source or commercial software). The conditions to satisfy are also similar to BSD's:

    ◆ You must give users of the code a copy of the Apache License.

    ◆ If you modified the code, you must state so in the modified files.

    ◆ Derived code (modifications and code derived from the source) must carry the license, trademark, and patent notices from the original code, along with any other attribution the original author requires.

    ◆ If the redistributed product includes a NOTICE file, that file must include the Apache License. You may add your own attributions in the NOTICE file, but they must not be presented as modifying the Apache License.

    The Apache License is also friendly to commercial applications. Users can modify the code as needed and release or sell the result as an open-source or commercial product.

  • How to upload SVGs to WordPress

    How to upload SVGs to WordPress

    As you may know, WordPress does not support SVG uploads out of the box; if you try to upload an SVG, you will see an error telling you the file type is not permitted for security reasons.

    How do we solve this? There are two approaches: install a plugin such as SVG Support, or use the upload_mimes filter hook that WordPress provides.

    • Open cPanel or another file editor and go to the WordPress root directory -> File Manager.
    • Edit your active theme's functions.php file (avoid editing core files such as wp-includes/functions.php, since a WordPress update will overwrite them).
    // Register the SVG MIME type so WordPress accepts .svg uploads.
    function add_file_types_to_uploads($file_types){
      $new_filetypes = array();
      $new_filetypes['svg'] = 'image/svg+xml';
      $file_types = array_merge($file_types, $new_filetypes);
      return $file_types;
    }
    add_filter('upload_mimes', 'add_file_types_to_uploads');

    Paste the code above at the end of the file and refresh the admin dashboard; you can now upload SVG files. Keep in mind that SVG files can contain embedded scripts, so only allow uploads from users you trust.

  • Control-flow Integrity in V8

    Control-flow Integrity in V8

    Published 09 October 2023 · Tagged with security

    Control-flow integrity (CFI) is a security feature aiming to prevent exploits from hijacking control-flow. The idea is that even if an attacker manages to corrupt the memory of a process, additional integrity checks can prevent them from executing arbitrary code. In this blog post, we want to discuss our work to enable CFI in V8.

    Background

    The popularity of Chrome makes it a valuable target for 0-day attacks and most in-the-wild exploits we’ve seen target V8 to gain initial code execution. V8 exploits typically follow a similar pattern: an initial bug leads to memory corruption but often the initial corruption is limited and the attacker has to find a way to arbitrarily read/write in the whole address space. This allows them to hijack the control-flow and run shellcode that executes the next step of the exploit chain that will try to break out of the Chrome sandbox.

    To prevent the attacker from turning memory corruption into shellcode execution, we’re implementing control-flow integrity in V8. This is especially challenging in the presence of a JIT compiler. If you turn data into machine code at runtime, you now need to ensure that corrupted data can’t turn into malicious code. Fortunately, modern hardware features provide us with the building blocks to design a JIT compiler that is robust even while processing corrupted memory.

    Following, we’ll look at the problem divided into three separate parts:

    • Forward-Edge CFI verifies the integrity of indirect control-flow transfers such as function pointer or vtable calls.
    • Backward-Edge CFI needs to ensure that return addresses read from the stack are valid.
    • JIT Memory Integrity validates all data that is written to executable memory at runtime.

    Forward-Edge CFI

    There are two hardware features that we want to use to protect indirect calls and jumps: landing pads and pointer authentication.

    Landing Pads

    Landing pads are special instructions that can be used to mark valid branch targets. If enabled, indirect branches can only jump to a landing pad instruction, anything else will raise an exception.
    On ARM64 for example, landing pads are available with the Branch Target Identification (BTI) feature introduced in Armv8.5-A. BTI support is already enabled in V8.
    On x64, landing pads were introduced with the Indirect Branch Tracking (IBT) part of the Control Flow Enforcement Technology (CET) feature.

    However, adding landing pads on all potential targets for indirect branches only provides us with coarse-grained control-flow integrity and still gives attackers lots of freedom. We can further tighten the restrictions by adding function signature checks (the argument and return types at the call site must match the called function) as well as through dynamically removing unneeded landing pad instructions at runtime.
    These features are part of the recent FineIBT proposal and we hope that it can get OS adoption.

    Pointer Authentication

    Armv8.3-A introduced pointer authentication (PAC) which can be used to embed a signature in the upper unused bits of a pointer. Since the signature is verified before the pointer is used, attackers won’t be able to provide arbitrary forged pointers to indirect branches.

    Backward-Edge CFI

    To protect return addresses, we also want to make use of two separate hardware features: shadow stacks and PAC.

    Shadow Stacks

    With Intel CET’s shadow stacks and the guarded control stack (GCS) in Armv9.4-A, we can have a separate stack just for return addresses that has hardware protections against malicious writes. These features provide some pretty strong protections against return address overwrites, but we will need to deal with cases where we legitimately modify the return stack such as during optimization / deoptimization and exception handling.

    Pointer Authentication (PAC-RET)

    Similar to indirect branches, pointer authentication can be used to sign return addresses before they get pushed to the stack. This is already enabled in V8 on ARM64 CPUs.

    A side effect of using hardware support for Forward-edge and Backward-edge CFI is that it will allow us to keep the performance impact to a minimum.

    JIT Memory Integrity

    A unique challenge to CFI in JIT compilers is that we need to write machine code to executable memory at runtime. We need to protect the memory in a way that the JIT compiler is allowed to write to it but the attacker’s memory write primitive can’t. A naive approach would be to change the page permissions temporarily to add / remove write access. But this is inherently racy since we need to assume that the attacker can trigger an arbitrary write concurrently from a second thread.

    Per-thread Memory Permissions

    On modern CPUs, we can have different views of the memory permissions that only apply to the current thread and can be changed quickly in userland.
    On x64 CPUs, this can be achieved with memory protection keys (pkeys) and ARM announced the permission overlay extensions in Armv8.9-A.
    This allows us to toggle write access to executable memory in a fine-grained way, for example by tagging it with a separate pkey.

    The JIT pages are no longer attacker-writable, but the JIT compiler still needs to write generated code into them. In V8, the generated code lives in AssemblerBuffers on the heap, which can be corrupted by the attacker instead. We could protect the AssemblerBuffers too in the same fashion, but this just shifts the problem. For example, we’d then also need to protect the memory where the pointer to the AssemblerBuffer lives.
    In fact, any code that enables write access to such protected memory constitutes CFI attack surface and needs to be coded very defensively. E.g. any write to a pointer that comes from unprotected memory is game over, since the attacker can use it to corrupt executable memory. Thus, our design goal is to have as few of these critical sections as possible and keep the code inside short and self-contained.

    Control-Flow Validation

    If we don’t want to protect all compiler data, we can assume it to be untrusted from the point of view of CFI instead. Before writing anything to executable memory, we need to validate that it doesn’t lead to arbitrary control-flow. That includes for example that the written code doesn’t perform any syscall instructions or that it doesn’t jump into arbitrary code. Of course, we also need to check that it doesn’t change the pkey permissions of the current thread. Note that we don’t try to prevent the code from corrupting arbitrary memory since if the code is corrupted we can assume the attacker already has this capability.
    To perform such validation safely, we will also need to keep required metadata in protected memory as well as protect local variables on the stack.
    We ran some preliminary tests to assess the impact of such validation on performance. Fortunately, the validation does not occur in performance-critical code paths, and we did not observe any regressions in the JetStream or Speedometer benchmarks.

    Evaluation

    Offensive security research is an essential part of any mitigation design and we’re continuously trying to find new ways to bypass our protections. Here are some examples of attacks that we think will be possible and ideas to address them.

    Corrupted Syscall Arguments

    As mentioned before, we assume that an attacker can trigger a memory write primitive concurrently to other running threads. If another thread performs a syscall, some of the arguments could then be attacker-controlled if they’re read from memory. Chrome runs with a restrictive syscall filter but there’s still a few syscalls that could be used to bypass the CFI protections.

    sigaction, for example, is a syscall used to register signal handlers. During our research we found that a sigaction call in Chrome is reachable in a CFI-compliant way. Since the arguments are passed in memory, an attacker could trigger this code path and point the signal handler function to arbitrary code. Luckily, we can address this easily: either block the path to the sigaction call or block it with a syscall filter after initialization.

    Other interesting examples are the memory management syscalls. For example, if a thread calls munmap on a corrupted pointer, the attacker could unmap read-only pages and a consecutive mmap call can reuse this address, effectively adding write permissions to the page.
    Some OSes already provide protections against this attack with memory sealing: Apple platforms provide the VM_FLAGS_PERMANENT flag and OpenBSD has an mimmutable syscall.

    Signal Frame Corruption

    When the kernel executes a signal handler, it will save the current CPU state on the userland stack. A second thread could corrupt the saved state which will then get restored by the kernel.
    Protecting against this in user space seems difficult if the signal frame data is untrusted. At that point one would have to always exit or overwrite the signal frame with a known save state to return to.
    A more promising approach would be to protect the signal stack using per-thread memory permissions. For example, a pkey-tagged sigaltstack would protect against malicious overwrites, but it would require the kernel to temporarily allow write permissions when saving the CPU state onto it.

    v8CTF

    These were just a few examples of potential attacks that we’re working on addressing and we also want to learn more from the security community. If this interests you, try your hand at the recently launched v8CTF! Exploit V8 and gain a bounty, exploits targeting n-day vulnerabilities are explicitly in scope!

  • Renaming WordPress post types and taxonomies

    Renaming WordPress post types and taxonomies

    WordPress ships with built-in post types: pages, posts, and media are all post types. Some themes and plugins add extra ones; anything not built into the core is usually called a custom post type, for example Products or Projects. These custom post types usually register their own taxonomies as well, known as custom taxonomies.

    When building a WordPress site, you may occasionally need to rename a post type or a custom taxonomy. For those comfortable with code this is fairly easy, but for ordinary users fiddling with code is risky, so here are two plugins that cover the need.

    Renaming post types with Custom Post Type Editor

    With the Custom Post Type Editor plugin, you can customize the text labels, menu names, or description of any registered custom post type from a simple dashboard UI. No PHP file editing required!

    Screenshot: Cpt Editor

    For example, you can customize the following post types:

    • The Posts post type (created by WordPress core)
    • The Pages post type (created by WordPress core)
    • The Media post type (created by WordPress core)
    • Any custom post type created by a WordPress plugin
    • Any custom post type created by a WordPress theme

    This means you no longer need to modify PHP files to rename a custom post type!

    Download Custom Post Type Editor

    Renaming taxonomies with Rename Taxonomies

    The Rename Taxonomies plugin lets you customize any taxonomy's labels through a simple interface, with no coding. It only renames the labels without changing the registered taxonomy key, so the plugin will not cause any taxonomy conflicts.

    • Easily rename any taxonomy (for example, rename "Categories" to "Topics")
    • Rename custom taxonomies (added by third-party plugins)
    • Simple, intuitive user interface
    • No coding required
    • Translation-ready
    • Compatible with multilingual plugins

    Screenshot: Rename Taxonomies

    Download Rename Taxonomies

    Article source: wordpress大学 (WPDaXue)

  • TSMC expands in the US: third fab breaks ground despite mounting losses

    TSMC expands in the US: third fab breaks ground despite mounting losses

    Although its US fab lost more than 3.2 billion yuan over the past year, TSMC is pressing ahead with its American expansion. Company executives say even more advanced semiconductor manufacturing capacity will be brought to the US.

    TSMC recently broke ground on its third fab in Arizona. The news was disclosed by TSMC Chairman and President C.C. Wei in a recent statement, underscoring the company's commitment to its build-out there.

    According to TSMC's 2024 annual report, although the first Arizona fab has reached volume production, the lag between volume production and revenue recognition widened the losses at its US subsidiary, TSMC Arizona Corporation, from roughly NT$10.925 billion in 2023 to NT$14.298 billion in 2024.

    The Arizona site has so far won the support of five major customers: Apple, NVIDIA, AMD, Broadcom, and Qualcomm. The industry consensus is that as these customers ramp into volume production, and as the second and third fabs come online, the site will gradually reach economies of scale and narrow its losses.

    According to the plan, the first Arizona fab began production on the 4nm process in the fourth quarter of last year. Construction of the second fab's building is complete, and facility systems, including cleanrooms and mechanical and electrical works, are now being installed; that fab is expected to run the 3nm process.

    The newly started third fab carries even higher expectations: it is slated to manufacture chips on 2nm or even more advanced processes, producing TSMC's most advanced semiconductor products on US soil to better meet growing customer demand.

    Article source: Zhongguancun Online (ZOL)

  • The Nikon Z5II opens a new chapter in the dialogue between imaging technology and art

    The Nikon Z5II opens a new chapter in the dialogue between imaging technology and art

    At a time when camera gear is locked in a "spec war", the Nikon Z5II arrives like a quiet revolution. Rather than simply stacking pixels or raising frame rates, it takes color science as its origin and elevates image-making into a deep dialogue between technology and art. With the Z5II in hand, pressing the shutter is not just recording a scene; it is writing your own epic of light and shadow in the visual grammar Nikon has built.

    Nikon Z5II

    The Z5II's color philosophy begins with a question about the nature of visual language. While most cameras reduce color to parameters for post-production, the Z5II was tuned together with leading photographers worldwide, forging nine cloud-optimized calibration profiles into a "style gene bank" for creators.

    The EXPEED 7 image processor, with roughly ten times the processing power of the previous generation, acts almost like a visual oracle that anticipates the photographer's intent. Continuous shooting at 30 fps in C30 mode, combined with autofocus acquisition as fast as 0.02 seconds, lets the Z5II capture instants the human eye can barely register. In wedding coverage, the tremor of the groom's fingertip wiping away a tear and the arc of the bride's rising veil are frozen into lasting visual poetry.

    Under the Z5II's lens, portraiture moves beyond faithful reproduction toward a redefinition of beauty. Using Nikon's AI skin-tone algorithms, the Z5II adapts skin rendering to the ambient color temperature: in warm sunset light, skin takes on a honeyed warmth; in cool moonlight, it glimmers like mother-of-pearl. This command of color mood makes every portrait a visual collaboration between photographer and subject.

    With color, speed, and perspective as its pillars, the Z5II anchors an open creative ecosystem. Paired with the NIKKOR Z 28-400mm f/4-8 VR lens, its 14.3x zoom range covers everything from city panoramas to wildlife close-ups, while VR image stabilization keeps handheld shooting at the long end steady, so creators are never pulled out of their narrative rhythm by "lens-swap anxiety".

    On the long road of photographic creation, the Nikon Z5II is like a considerate, artistically fluent companion, giving creators a precious chance to redefine creative freedom. Shedding the coldness of traditional gear, it becomes a quiet, warm visual mentor at the creator's side, helping them capture every beautiful moment.

    Nikon Z5II specifications

    Main specs
    • Model: Z5II; announced: April 2025; type: full-frame mirrorless
    • Sensor: back-illuminated (BSI) CMOS, full frame (35.9 x 23.9mm), Nikon FX format
    • Pixels: 25.28 million total, 24.5 million effective; image processor: EXPEED 7
    • Maximum resolution: 6048 x 4032
    • Image sizes, FX (36 x 24) area: (L) 6048 x 4032 (approx. 24.4MP), (M) 4528 x 3024 (approx. 13.7MP), (S) 3024 x 2016 (approx. 6.1MP)
    • Image sizes, DX (24 x 16) area: (L) 3984 x 2656 (approx. 10.6MP), (M) 2976 x 1992 (approx. 5.9MP), (S) 1984 x 1328 (approx. 2.6MP)
    • Image sizes, 1:1 (24 x 24) area: (L) 4032 x 4032 (approx. 16.3MP), (M) 3024 x 3024 (approx. 9.1MP), (S) 2016 x 2016 (approx. 4.1MP)
    • Image sizes, 16:9 (36 x 20) area: (L) 6048 x 3400 (approx. 20.6MP), (M) 4528 x 2544 (approx. 11.5MP), (S) 3024 x 1696 (approx. 5.1MP)
    • Video: 4K UHD (2160p); positioning: entry level

    Lens and focusing
    • Lens mount: Nikon Z
    • Focusing: autofocus; 273 focus points (single-point AF), 299 (auto-area AF)

    Display and viewfinder
    • Monitor: 3.2-inch vari-angle TFT LCD touchscreen, approx. 2.1 million dots, approx. 170° viewing angle, approx. 100% frame coverage, color balance adjustment and 15-level manual brightness control; live view supported
    • Viewfinder: approx. 1.27cm (0.5-inch), approx. 3.69-million-dot (Quad VGA) OLED electronic viewfinder with color balance adjustment and auto or 13-level manual brightness control; approx. 100% horizontal and vertical frame coverage; approx. 0.8x magnification (50mm lens at infinity, -1.0 m⁻¹); 21mm eyepoint (at -1.0 m⁻¹, from the center surface of the eyepiece); -4 to +2 m⁻¹ diopter adjustment; eye sensor for automatic switching between monitor and viewfinder

    Shutter
    • Type: electronically controlled vertical-travel focal-plane shutter; electronic front-curtain shutter; electronic shutter
    • Speed: 1/8000 to 30s (in steps of 1/3EV; extendable to 900s in M mode), bulb, time

    Flash
    • Type: external flash (hot shoe)
    • Modes: front-curtain sync, slow sync, rear-curtain sync, red-eye reduction, slow sync with red-eye reduction, off
    • Control: i-TTL flash control; i-TTL balanced fill-flash with matrix, center-weighted, and highlight-weighted metering; standard i-TTL fill-flash with spot metering
    • Flash compensation: -3 to +1EV
    • Nikon Creative Lighting System (CLS): i-TTL flash control, optical wireless flash, modeling flash, FV lock, color information communication, auto FP high-speed sync

    Exposure
    • Modes: auto; program (P); aperture-priority (A); shutter-priority (S); manual (M)
    • Exposure compensation: ±5EV in steps of 1/3 or 1/2EV
    • Metering: matrix, center-weighted, spot
    • White balance: auto (3 types), natural light auto, direct sunlight, cloudy, shade, incandescent, fluorescent (3 types), flash, color temperature selection (2500K to 10000K), preset manual (up to 6 values can be stored); all with fine-tuning
    • Sensitivity: ISO 100-64000 (photos); ISO 100-51200 (video)
    • Active D-Lighting: auto, extra high, high, normal, low, off
    • Multiple exposure: add, average, lighten, darken

    Shooting
    • Image stabilization: 5-axis in-body VR
    • Self-timer: 2s, 5s, 10s, 20s; 1-9 exposures at intervals of 0.5, 1, 2, or 3s
    • Continuous shooting: up to approx. 30 fps
      Low-speed: approx. 1-7 fps
      High-speed: approx. 7.8 fps (shutter type set to [Auto] or [M]); approx. 9.4 fps (electronic front-curtain shutter); approx. 10 fps (silent mode on)
      High-speed (extended): approx. 14 fps; approx. 15 fps (silent mode on)
      High-speed frame capture + (C15): approx. 15 fps
      High-speed frame capture + (C30): approx. 30 fps
    • Audio recording: supported

    Storage
    • Media: SD/SDHC/SDXC (UHS-II compatible); dual SD card slots
    • File formats: NEF (RAW), 14-bit, with a choice of lossless compression, high efficiency (high), or high efficiency; JPEG (JPEG-Baseline compliant) at fine (approx. 1:4), normal (approx. 1:8), or basic (approx. 1:16) compression, with size-priority and optimal-quality options; HEIF at fine (approx. 1:4), normal (approx. 1:8), or basic (approx. 1:16)

    Power and connectivity
    • Battery: EN-EL15c lithium-ion
    • Interfaces: USB Type-C, HDMI, 3.5mm stereo mini jack
    • Wireless: Wi-Fi, Bluetooth 5.0
    • Audio: built-in stereo microphone or external microphone; speaker; vlog shooting supported

    Other features
    • Release modes: single frame, low-speed continuous, high-speed continuous, high-speed continuous (extended), high-speed frame capture (with pre-release capture), self-timer
    • Additional photo options: vignette control, diffraction compensation, auto distortion control, skin softening, portrait impression balance, interval-timer shooting, focus shift, and pixel-shift shooting
    • Additional video options: time-lapse video, electronic VR, time code, N-Log and HDR (HLG) output, waveform display, red REC frame indicator, zoomed video display (50%, 100%, 200%, 400%), extended shutter speeds (S and M modes), dual-format (proxy) recording for RAW video, recording-info display via the i menu, and hi-res zoom
    • Operating environment: temperature 0°C to 40°C; humidity 85% or less (no condensation)

    Body
    • Dimensions: 144 x 103 x 49mm
    • Weight: approx. 620g (body only); approx. 700g (with battery and memory card, excluding body cap and accessory-shoe cover)

    In the box
    • Camera x1; DK-29 rubber eyecup x1; BF-N1 body cap x1; EN-EL15c lithium-ion battery with terminal cover x1; AN-DC26 strap x1; USB cable with Type-C connectors at both ends x1

    *Specifications from the ZOL product database

  • The V8 Sandbox

    The V8 Sandbox

    After almost three years since the initial design document and hundreds of CLs in the meantime, the V8 Sandbox — a lightweight, in-process sandbox for V8 — has now progressed to the point where it is no longer considered an experimental security feature. Starting today, the V8 Sandbox is included in Chrome’s Vulnerability Reward Program (VRP). While there are still a number of issues to resolve before it becomes a strong security boundary, the VRP inclusion is an important step in that direction. Chrome 123 could therefore be considered to be a sort of “beta” release for the sandbox. This blog post uses this opportunity to discuss the motivation behind the sandbox, show how it prevents memory corruption in V8 from spreading within the host process, and ultimately explain why it is a necessary step towards memory safety.

    Motivation

    Memory safety remains a relevant problem: all Chrome exploits caught in the wild in the last three years (2021 – 2023) started out with a memory corruption vulnerability in a Chrome renderer process that was exploited for remote code execution (RCE). Of these, 60% were vulnerabilities in V8. However, there is a catch: V8 vulnerabilities are rarely “classic” memory corruption bugs (use-after-frees, out-of-bounds accesses, etc.) but instead subtle logic issues which can in turn be exploited to corrupt memory. As such, existing memory safety solutions are, for the most part, not applicable to V8. In particular, neither switching to a memory safe language, such as Rust, nor using current or future hardware memory safety features, such as memory tagging, can help with the security challenges faced by V8 today.

    To understand why, consider a highly simplified, hypothetical JavaScript engine vulnerability: the implementation of JSArray::fizzbuzz(), which replaces values in the array that are divisible by 3 with “fizz”, divisible by 5 with “buzz”, and divisible by both 3 and 5 with “fizzbuzz”. Below is an implementation of that function in C++. JSArray::buffer_ can be thought of as a JSValue*, that is, a pointer to an array of JavaScript values, and JSArray::length_ contains the current size of that buffer.

    for (int index = 0; index < length_; index++) {
      JSValue js_value = buffer_[index];
      int value = ToNumber(js_value).int_value();
      if (value % 15 == 0)
        buffer_[index] = JSString("fizzbuzz");
      else if (value % 5 == 0)
        buffer_[index] = JSString("buzz");
      else if (value % 3 == 0)
        buffer_[index] = JSString("fizz");
    }

    Seems simple enough? However, there’s a somewhat subtle bug here: the ToNumber conversion in line 3 can have side effects as it may invoke user-defined JavaScript callbacks. Such a callback could then shrink the array, thereby causing an out-of-bounds write afterwards. The following JavaScript code would likely cause memory corruption:

    let array = new Array(100);
    let evil = { [Symbol.toPrimitive]() { array.length = 1; return 15; } };
    array.push(evil);
    // At index 100, the @@toPrimitive callback of |evil| is invoked in
    // line 3 above, shrinking the array to length 1 and reallocating its
    // backing buffer. The subsequent write (line 5) goes out-of-bounds.
    array.fizzbuzz();

    Note that this vulnerability could occur both in hand-written runtime code (as in the example above) or in machine code generated at runtime by an optimizing just-in-time (JIT) compiler (if the function was implemented in JavaScript instead). In the former case, the programmer would conclude that an explicit bounds-check for the store operations is not necessary as that index has just been accessed. In the latter case, it would be the compiler drawing the same incorrect conclusion during one of its optimization passes (for example redundancy elimination or bounds-check elimination) because it doesn’t model the side effects of ToNumber() correctly.

    While this is an artificially simple bug (this specific bug pattern has become mostly extinct by now due to improvements in fuzzers, developer awareness, and researcher attention), it is still useful to understand why vulnerabilities in modern JavaScript engines are difficult to mitigate in a generic way. Consider the approach of using a memory safe language such as Rust, where it is the compiler’s responsibility to guarantee memory safety. In the above example, a memory safe language would likely prevent this bug in the hand-written runtime code used by the interpreter. However, it would not prevent the bug in any just-in-time compiler as the bug there would be a logic issue, not a “classic” memory corruption vulnerability. Only the code generated by the compiler would actually cause any memory corruption. Fundamentally, the issue is that memory safety cannot be guaranteed by the compiler if a compiler is directly part of the attack surface.

    Similarly, disabling the JIT compilers would also only be a partial solution: historically, roughly half of the bugs discovered and exploited in V8 affected one of its compilers while the rest were in other components such as runtime functions, the interpreter, the garbage collector, or the parser. Using a memory-safe language for these components and removing JIT compilers could work, but would significantly reduce the engine’s performance (ranging, depending on the type of workload, from 1.5–10× or more for computationally intensive tasks).

    Now consider instead popular hardware security mechanisms, in particular memory tagging. There are a number of reasons why memory tagging would similarly not be an effective solution. For example, CPU side channels, which can easily be exploited from JavaScript, could be abused to leak tag values, thereby allowing an attacker to bypass the mitigation. Furthermore, due to pointer compression, there is currently no space for the tag bits in V8’s pointers. As such, the entire heap region would have to be tagged with the same tag, making it impossible to detect inter-object corruption. As such, while memory tagging can be very effective on certain attack surfaces, it is unlikely to represent much of a hurdle for attackers in the case of JavaScript engines.

    In summary, modern JavaScript engines tend to contain complex, 2nd-order logic bugs which provide powerful exploitation primitives. These cannot be effectively protected by the same techniques used for typical memory-corruption vulnerabilities. However, nearly all vulnerabilities found and exploited in V8 today have one thing in common: the eventual memory corruption necessarily happens inside the V8 heap because the compiler and runtime (almost) exclusively operate on V8 HeapObject instances. This is where the sandbox comes into play.

    The V8 (Heap) Sandbox

    The basic idea behind the sandbox is to isolate V8’s (heap) memory such that any memory corruption there cannot “spread” to other parts of the process’ memory.

    As a motivating example for the sandbox design, consider the separation of user- and kernel space in modern operating systems. Historically, all applications and the operating system’s kernel would share the same (physical) memory address space. As such, any memory error in a user application could bring down the whole system by, for example, corrupting kernel memory. On the other hand, in a modern operating system, each userland application has its own dedicated (virtual) address space. As such, any memory error is limited to the application itself, and the rest of the system is protected. In other words, a faulty application can crash itself but not affect the rest of the system. Similarly, the V8 Sandbox attempts to isolate the untrusted JavaScript/WebAssembly code executed by V8 such that a bug in V8 does not affect the rest of the hosting process.

    In principle, the sandbox could be implemented with hardware support: similar to the userland-kernel split, V8 would execute some mode-switching instruction when entering or leaving sandboxed code, which would cause the CPU to be unable to access out-of-sandbox memory. In practice, no suitable hardware feature is available today, and the current sandbox is therefore implemented purely in software.

    The basic idea behind the software-based sandbox is to replace all data types that can access out-of-sandbox memory with “sandbox-compatible” alternatives. In particular, all pointers (both to objects on the V8 heap or elsewhere in memory) and 64-bit sizes must be removed as an attacker could corrupt them to subsequently access other memory in the process. This implies that memory regions such as the stack cannot be inside the sandbox as they must contain pointers (for example return addresses) due to hardware and OS constraints. As such, with the software-based sandbox, only the V8 heap is inside the sandbox, and the overall construction is therefore not unlike the sandboxing model used by WebAssembly.

    To understand how this works in practice, it is useful to look at the steps an exploit has to perform after corrupting memory. The goal of an RCE exploit would typically be to perform a privilege escalation attack, for example by executing shellcode or performing a return-oriented programming (ROP)-style attack. For either of these, the exploit will first want the ability to read and write arbitrary memory in the process, for example to then corrupt a function pointer or place a ROP-payload somewhere in memory and pivot to it. Given a bug that corrupts memory on the V8 heap, an attacker would therefore look for an object such as the following:

    class JSArrayBuffer: public JSObject {
      private:
        byte* buffer_;
        size_t size_;
    };

    Given this, the attacker would then either corrupt the buffer pointer or the size value to construct an arbitrary read/write primitive. This is the step that the sandbox aims to prevent. In particular, with the sandbox enabled, and assuming that the referenced buffer is located inside the sandbox, the above object would now become:

    class JSArrayBuffer: public JSObject {
      private:
        sandbox_ptr_t buffer_;
        sandbox_size_t size_;
    };

    Where sandbox_ptr_t is a 40-bit offset (in the case of a 1TB sandbox) from the base of the sandbox. Similarly, sandbox_size_t is a “sandbox-compatible” size, currently limited to 32GB.
    Alternatively, if the referenced buffer was located outside of the sandbox, the object would instead become:

    class JSArrayBuffer: public JSObject {
      private:
        external_ptr_t buffer_;
    };

    Here, an external_ptr_t references the buffer (and its size) through a pointer table indirection (not unlike the file descriptor table of a unix kernel or a WebAssembly.Table) which provides memory safety guarantees.

    In both cases, an attacker would find themselves unable to “reach out” of the sandbox into other parts of the address space. Instead, they would first need an additional vulnerability: a V8 Sandbox bypass. The following image summarizes the high-level design, and the interested reader can find more technical details about the sandbox in the design documents linked from src/sandbox/README.md.

    A high-level diagram of the sandbox design

    Solely converting pointers and sizes to a different representation is not quite sufficient in an application as complex as V8 and there are a number of other issues that need to be fixed. For example, with the introduction of the sandbox, code such as the following suddenly becomes problematic:

    std::vector<std::string> JSObject::GetPropertyNames() {
        int num_properties = TotalNumberOfProperties();
        std::vector<std::string> properties(num_properties);
    
        for (int i = 0; i < NumberOfInObjectProperties(); i++) {
            properties[i] = GetNameOfInObjectProperty(i);
        }
    
        // Deal with the other types of properties
        // ...

        return properties;
    }

    This code makes the (reasonable) assumption that the number of properties stored directly in a JSObject must be less than the total number of properties of that object. However, assuming these numbers are simply stored as integers somewhere in the JSObject, an attacker could corrupt one of them to break this invariant. Subsequently, the access into the (out-of-sandbox) std::vector would go out of bounds. Adding an explicit bounds check, for example with an SBXCHECK, would fix this.

    Encouragingly, nearly all “sandbox violations” discovered so far are like this: trivial (1st order) memory corruption bugs such as use-after-frees or out-of-bounds accesses due to lack of a bounds check. Contrary to the 2nd order vulnerabilities typically found in V8, these sandbox bugs could actually be prevented or mitigated by the approaches discussed earlier. In fact, the particular bug above would already be mitigated today due to Chrome’s libc++ hardening. As such, the hope is that in the long run, the sandbox becomes a more defensible security boundary than V8 itself. While the currently available data set of sandbox bugs is very limited, the VRP integration launching today will hopefully help produce a clearer picture of the type of vulnerabilities encountered on the sandbox attack surface.

    Performance

    One major advantage of this approach is that it is fundamentally cheap: the overhead caused by the sandbox comes mostly from the pointer table indirection for external objects (costing roughly one additional memory load) and to a lesser extent from the use of offsets instead of raw pointers (costing mostly just a shift+add operation, which is very cheap). The current overhead of the sandbox is therefore only around 1% or less on typical workloads (measured using the Speedometer and JetStream benchmark suites). This allows the V8 Sandbox to be enabled by default on compatible platforms.

    Testing

    A desirable feature for any security boundary is testability: the ability to manually and automatically test that the promised security guarantees actually hold in practice. This requires a clear attacker model, a way to “emulate” an attacker, and ideally a way of automatically determining when the security boundary has failed. The V8 Sandbox fulfills all of these requirements:

    1. A clear attacker model: it is assumed that an attacker can read and write arbitrarily inside the V8 Sandbox. The goal is to prevent memory corruption outside of the sandbox.
    2. A way to emulate an attacker: V8 provides a “memory corruption API” when built with the v8_enable_memory_corruption_api = true flag. This emulates the primitives obtained from typical V8 vulnerabilities and in particular provides full read- and write access inside the sandbox.
    3. A way to detect “sandbox violations”: V8 provides a “sandbox testing” mode (enabled via either --sandbox-testing or --sandbox-fuzzing) which installs a signal handler that determines if a signal such as SIGSEGV represents a violation of the sandbox’s security guarantees.

    Ultimately, this allows the sandbox to be integrated into Chrome’s VRP program and be fuzzed by specialized fuzzers.

    Usage

    The V8 Sandbox must be enabled/disabled at build time using the v8_enable_sandbox build flag. It is (for technical reasons) not possible to enable/disable the sandbox at runtime. The V8 Sandbox requires a 64-bit system as it needs to reserve a large amount of virtual address space, currently one terabyte.
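    Putting the build-time and runtime pieces together, a testing configuration might look like the following args.gn sketch. The flag names come from this post; enabling the memory corruption API is only appropriate in dedicated testing builds:

```gn
# args.gn (sketch)
v8_enable_sandbox = true                 # build-time only; cannot be toggled at runtime
v8_enable_memory_corruption_api = true   # testing builds only: emulates an attacker

# At runtime, pass --sandbox-testing or --sandbox-fuzzing to install the
# signal handler that classifies crashes as sandbox violations.
```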

    The V8 Sandbox has already been enabled by default on 64-bit (specifically x64 and arm64) versions of Chrome on Android, ChromeOS, Linux, macOS, and Windows for roughly the last two years. Even though the sandbox was (and still is) not feature complete, this was mainly done to ensure that it does not cause stability issues and to collect real-world performance statistics. Consequently, recent V8 exploits already had to work their way past the sandbox, providing helpful early feedback on its security properties.

    Conclusion

    The V8 Sandbox is a new security mechanism designed to prevent memory corruption in V8 from impacting other memory in the process. The sandbox is motivated by the fact that current memory safety technologies are largely inapplicable to optimizing JavaScript engines. While these technologies fail to prevent memory corruption in V8 itself, they can in fact protect the V8 Sandbox attack surface. The sandbox is therefore a necessary step towards memory safety.

  • WordPress 6.8 officially released, optimizing site performance

    WordPress 6.8 officially released, optimizing site performance

    WordPress 6.8 refines and polishes the tools you use every day, making your site faster, more secure, and easier to manage. The Style Book now has a structured layout and works with classic themes, giving you more control over global styles. Speculative loading speeds up navigation by preloading links before users navigate to them, the bcrypt hashing algorithm automatically strengthens password security, and database optimizations improve performance.


    The Style Book is cleaner, with a few new tricks

    The Style Book has a new structured layout and clearer labels, making it easier to edit colors and typography (nearly all of your site's styles) in one place.

    It is also now available in classic themes that include editor-styles support or a theme.json file. Find the Style Book under Appearance > Design, and use it to preview how your theme evolves as you edit CSS or make changes in the Customizer.

    Editor improvements

    Options in Data Views are easier to see, and sticky posts can now be excluded from the Query Loop. The editor also ships many small refinements that make building everything smoother.

    Nearly instant page loads, thanks to speculative loading

    In WordPress 6.8, pages load faster than ever. When you or your users hover over or click a link, WordPress may preload the next page for a smoother, near-instant experience. The system balances speed and efficiency, and you can control how it behaves through plugins or custom code. The feature only applies in modern browsers; older browsers simply ignore it with no adverse effects.

    Stronger password security with bcrypt

    Passwords are now harder to crack: the bcrypt hashing algorithm requires far more computing power to break. This strengthens overall security, as do other cryptographic improvements in WordPress. You don't need to do anything; everything updates automatically.

    Learn more: WordPress 6.8 will use bcrypt for password hashing

    Accessibility improvements

    Over 100 accessibility fixes and enhancements span a wide range of WordPress experiences. This release fixes all bundled themes, improves navigation menu management and customization tools, and streamlines tagging. The block editor received more than 70 improvements covering blocks, Data Views, and the overall user experience.

    Performance updates

    WordPress 6.8 includes a series of performance fixes and enhancements aimed at improving everything from editing to browsing. Beyond speculative loading, this release pays particular attention to the block editor, block type registration, and query caching. And imagine never waiting more than 50 milliseconds for any interaction: with WordPress 6.8, the Interactivity API takes its first steps toward that goal.

    Article source: WordPress大学