Recovering WordPress after an Ubuntu Upgrade

Yesterday I carelessly upgraded Ubuntu to 16.04.5 LTS, and as a result the blog became unreachable. After a quick look, the main causes appeared to be:

  • nginx replaced apache2 as the default HTTP server
  • PHP was upgraded to 7.0, which requires enabling the new libapache2-mod-php module, along with php-mysql and friends

First attempt

For now I stuck with Apache, downloaded a fresh copy of WordPress 5.0, and kept the old MySQL tables, leaving the dbName and tableName unchanged, but the site still failed to render.

Since the old WordPress had been upgraded from 4.* to 5.0 and many configuration files had changed, a fresh initialization was needed after all.

Second attempt

After switching the MySQL tables to a new table-name prefix, the site finally opened, but all of the old posts were gone. Plugins and themes can simply be copied out of the old installation's wp-content directory, but posts live entirely in the MySQL tables, so they had to be recovered from the old tables.

Third attempt

I tried to restore by copying the table data wholesale:

delete from wp_posts;
delete from wp_options;
insert into wp_posts select * from old_posts;
insert into wp_options select * from old_options;

But the site failed to render again, so this attempt was abandoned.

Fourth attempt

Next I tried data patching: first publish a throwaway post, then UPDATE the key fields of that new row with the values from the old table, thereby "patching" it back into the original article. For example:

update wp_posts a 
join ngtnf_posts b 
set a.post_date = b.post_date, a.post_date_gmt = b.post_date_gmt, a.post_content = b.post_content, a.post_title = b.post_title, a.post_modified = b.post_modified, a.post_modified_gmt = b.post_modified_gmt where a.id = 45 and b.id = 314;

But the recovered text came out garbled. It turned out that the Ubuntu upgrade had also upgraded MySQL, which changed the default charset of newly created tables.

The old default charset was latin1:
CREATE TABLE `ngtnf_posts` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `post_author` bigint(20) unsigned NOT NULL DEFAULT '0',
  `post_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_content` longtext NOT NULL,
  `post_title` text NOT NULL,
  `post_excerpt` text NOT NULL,
  `post_status` varchar(20) NOT NULL DEFAULT 'publish',
  `comment_status` varchar(20) NOT NULL DEFAULT 'open',
  `ping_status` varchar(20) NOT NULL DEFAULT 'open',
  `post_password` varchar(255) NOT NULL DEFAULT '',
  `post_name` varchar(200) NOT NULL DEFAULT '',
  `to_ping` text NOT NULL,
  `pinged` text NOT NULL,
  `post_modified` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_modified_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_content_filtered` longtext NOT NULL,
  `post_parent` bigint(20) unsigned NOT NULL DEFAULT '0',
  `guid` varchar(255) NOT NULL DEFAULT '',
  `menu_order` int(11) NOT NULL DEFAULT '0',
  `post_type` varchar(20) NOT NULL DEFAULT 'post',
  `post_mime_type` varchar(100) NOT NULL DEFAULT '',
  `comment_count` bigint(20) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`),
  KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
  KEY `post_parent` (`post_parent`),
  KEY `post_author` (`post_author`),
  KEY `post_name` (`post_name`(191))
) ENGINE=InnoDB AUTO_INCREMENT=318 DEFAULT CHARSET=latin1

The new default charset is utf8mb4:
CREATE TABLE `wp_posts` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `post_author` bigint(20) unsigned NOT NULL DEFAULT '0',
  `post_date` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_date_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_content` longtext COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `post_title` text COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `post_excerpt` text COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `post_status` varchar(20) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT 'publish',
  `comment_status` varchar(20) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT 'open',
  `ping_status` varchar(20) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT 'open',
  `post_password` varchar(255) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `post_name` varchar(200) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `to_ping` text COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `pinged` text COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `post_modified` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_modified_gmt` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `post_content_filtered` longtext COLLATE utf8mb4_unicode_520_ci NOT NULL,
  `post_parent` bigint(20) unsigned NOT NULL DEFAULT '0',
  `guid` varchar(255) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `menu_order` int(11) NOT NULL DEFAULT '0',
  `post_type` varchar(20) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT 'post',
  `post_mime_type` varchar(100) COLLATE utf8mb4_unicode_520_ci NOT NULL DEFAULT '',
  `comment_count` bigint(20) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`),
  KEY `post_name` (`post_name`(191)),
  KEY `type_status_date` (`post_type`,`post_status`,`post_date`,`ID`),
  KEY `post_parent` (`post_parent`),
  KEY `post_author` (`post_author`)
) ENGINE=InnoDB AUTO_INCREMENT=84 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_520_ci 

mysql> set names utf8;
Query OK, 0 rows affected (0.00 sec)

mysql> select id,post_title from ngtnf_posts where id = 123;
+-----+---------------+
| id  | post_title    |
+-----+---------------+
| 123 | 首页        |
+-----+---------------+
1 row in set (0.00 sec)

mysql> set names latin1;
Query OK, 0 rows affected (0.00 sec)

mysql> select id,post_title from ngtnf_posts where id = 123;
+-----+------------+
| id  | post_title |
+-----+------------+
| 123 | 首页       |
+-----+------------+
1 row in set (0.00 sec)
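
The classic cause of this kind of mojibake is UTF-8 bytes sitting in a latin1 column: with set names latin1 the stored bytes travel back to the client untouched and a UTF-8 terminal renders them correctly, while with set names utf8 the server transcodes what it believes are latin1 characters and garbles them. A minimal Python sketch of the round trip (illustration only, no MySQL involved):

```python
# UTF-8 bytes stored in a latin1 column: MySQL keeps the raw bytes,
# but believes each byte is one latin1 character.
original = "首页"
stored_bytes = original.encode("utf-8")   # what actually sits in the column

# "set names utf8": the server re-encodes its presumed-latin1 characters
# into UTF-8, which produces mojibake on the client side.
mojibake = stored_bytes.decode("latin-1")
print(mojibake)

# "set names latin1": the bytes pass through unchanged, and a UTF-8
# terminal decodes them back to the original text.
print(mojibake.encode("latin-1").decode("utf-8"))
```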

Fifth attempt

So the first thing to solve is copying data between tables with different charsets. Export the rows with mysqldump, then import them into a temporary table; that completes the latin1-to-utf8mb4 conversion.

mysqldump -uroot --default-character-set=latin1 wordpress ngtnf_posts > posts.sql

In posts.sql, replace the table name with the temporary table name, and replace latin1 with utf8mb4:
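
The replacement step can be scripted. A minimal Python sketch, working on raw bytes so the dumped row data is never re-encoded (the backup table name ngtnf_posts_backup is my choice; adjust to taste):

```python
def convert_dump(data: bytes) -> bytes:
    """Point the dump at a temporary table and swap every latin1
    reference (SET NAMES, DEFAULT CHARSET, ...) to utf8mb4."""
    data = data.replace(b"ngtnf_posts", b"ngtnf_posts_backup")
    data = data.replace(b"latin1", b"utf8mb4")
    return data

# usage: rewrite posts.sql in place before feeding it to mysql
# with open("posts.sql", "rb") as f:
#     dump = f.read()
# with open("posts.sql", "wb") as f:
#     f.write(convert_dump(dump))
```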

mysql -uroot -Dwordpress < posts.sql 

With the data imported into the temporary table, the earlier data-patching approach can be reused for the restore:

update wp_posts a 
join ngtnf_posts_backup b 
set a.post_date = b.post_date, a.post_date_gmt = b.post_date_gmt, a.post_content = b.post_content, a.post_title = b.post_title, a.post_modified = b.post_modified, a.post_modified_gmt = b.post_modified_gmt where a.id = 45 and b.id = 314;

Let’s Encrypt Upgrade Procedure

Today I received an email from Let’s Encrypt; the gist is that TLS-SNI-01 will no longer be supported and must be replaced with another validation method:

You need to update your ACME client to use an alternative validation method (HTTP-01, DNS-01 or TLS-ALPN-01) before this date or your certificate renewals will break and existing certificates will start to expire.

Following the official guide, I started the upgrade. First, certbot needs to be upgraded to 0.28.0 (mine was still at 0.19.0):

ubuntu@ip-172-31-12-237:~$ certbot --version
 certbot 0.19.0

Then, following the steps, the dry run passed:

root@ip-172-31-12-237:~# sudo certbot renew --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/njujiang.tech.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not due for renewal, but simulating renewal for dry run
Plugins selected: Authenticator apache, Installer apache
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for njujiang.tech
Waiting for verification...
Cleaning up challenges

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed with reload of apache server; fullchain is
/etc/letsencrypt/live/njujiang.tech/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/njujiang.tech/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Then force the renewal:

root@ip-172-31-12-237:~# sudo certbot renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/njujiang.tech.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not yet due for renewal

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/njujiang.tech/fullchain.pem expires on 2019-03-30 (skipped)
No renewals were attempted.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
root@ip-172-31-12-237:~# 
root@ip-172-31-12-237:~# 
root@ip-172-31-12-237:~# certbot renew --force-renewal
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/njujiang.tech.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Plugins selected: Authenticator apache, Installer apache
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for njujiang.tech
Waiting for verification...
Cleaning up challenges

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed with reload of apache server; fullchain is
/etc/letsencrypt/live/njujiang.tech/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/njujiang.tech/fullchain.pem (success)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Analyzing resource usage in JobHistory

Background

We need to collect counter information for every SQL statement run through Hive. The MapReduce framework defines the following counters:

  // Counters used by Task subclasses
  public static enum Counter { 
    MAP_INPUT_RECORDS, 
    MAP_OUTPUT_RECORDS,
    MAP_SKIPPED_RECORDS,
    MAP_INPUT_BYTES, 
    MAP_OUTPUT_BYTES,
    COMBINE_INPUT_RECORDS,
    COMBINE_OUTPUT_RECORDS,
    REDUCE_INPUT_GROUPS,
    REDUCE_SHUFFLE_BYTES,
    REDUCE_INPUT_RECORDS,
    REDUCE_OUTPUT_RECORDS,
    REDUCE_SKIPPED_GROUPS,
    REDUCE_SKIPPED_RECORDS,
    SPILLED_RECORDS,
    SPLIT_RAW_BYTES,
    CPU_MILLISECONDS,
    PHYSICAL_MEMORY_BYTES,
    VIRTUAL_MEMORY_BYTES,
    COMMITTED_HEAP_BYTES
  }

As you can see, the counters fall into two categories:

  • MapReduce framework I/O statistics, such as record counts and byte counts
  • runtime metrics of the host, such as CPU time and memory usage

Obtaining the counters

  • Use the bundled hadoop rumen tool to parse the job history; the command is:
 hadoop jar \
  /opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/hadoop-rumen-2.6.0-cdh5.11.2.jar \
  org.apache.hadoop.tools.rumen.TraceBuilder \
  file:///tmp/job-trace.json \
  file:///tmp/job-topology.json \
  hdfs:///user/history/done/2018/06/06/000000

  • The generated job-trace.json then contains the details of every job for the day:
 {
  "jobID" : "job_1528373726326_0204",
  "queue" : "default",
  "user" : "hive",
  "jobName" : "INSERT OVERWRITE TABL...st_day('2018-05-16')(Stage-1)",
  "submitTime" : 1528781559636,
  "finishTime" : 1528781571551,
  "mapTasks" : [ {
    "startTime" : 1528781565131,
    "taskID" : "task_1528373726326_0204_m_000000",
    "taskType" : "MAP",
    "finishTime" : 1528781571514,
    "attempts" : [ {
      "startTime" : 1528781567259,
      "finishTime" : 1528781571514,
      "attemptID" : "attempt_1528373726326_0204_m_000000_0",
      "clockSplits" : [ 4201, 5, 4, 5, 4, 5, 5, 4, 5, 4, 5, 5 ],
      "cpuUsages" : [ 170, 171, 171, 171, 171, 171, 170, 171, 171, 171, 171, 171 ],
      "vmemKbytes" : [ 116591, 349773, 582955, 816136, 1049319, 1282500, 1515683, 1748864, 1982047, 2215229, 2448410, 2681593 ],
      "physMemKbytes" : [ 17301, 51903, 86505, 121107, 155710, 190312, 224915, 259516, 294119, 328722, 363323, 397926 ],
      "shuffleFinished" : -1,
      "sortFinished" : -1,
      "hdfsBytesRead" : 7795,
      "hdfsBytesWritten" : 2,
      "fileBytesRead" : 0,
      "fileBytesWritten" : 255682,
      "mapInputRecords" : 0,
      "mapOutputBytes" : -1,
      "mapOutputRecords" : 0,
      "combineInputRecords" : -1,
      "reduceInputGroups" : -1,
      "reduceInputRecords" : -1,
      "reduceShuffleBytes" : -1,
      "reduceOutputRecords" : -1,
      "spilledRecords" : 0,
      "mapInputBytes" : -1,
      "resourceUsageMetrics" : {
        "heapUsage" : 623378432,
        "virtualMemoryUsage" : 2865340416,
        "physicalMemoryUsage" : 425193472,
        "cumulativeCpuUsage" : 2050
      },
……
……
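
Since the rumen output is plain JSON, the per-attempt resource metrics can be pulled out with a few lines of Python. A sketch (field names follow the trace above; including reduceTasks alongside mapTasks is my assumption about the record layout):

```python
import json

def attempt_usage(job: dict):
    """Collect (attemptID, resourceUsageMetrics) for every attempt in one job record."""
    rows = []
    for task in job.get("mapTasks", []) + job.get("reduceTasks", []):
        for attempt in task.get("attempts", []):
            rows.append((attempt["attemptID"], attempt["resourceUsageMetrics"]))
    return rows

# abridged record in the shape shown above
job = json.loads("""
{
  "jobID": "job_1528373726326_0204",
  "mapTasks": [ { "attempts": [ {
      "attemptID": "attempt_1528373726326_0204_m_000000_0",
      "resourceUsageMetrics": { "cumulativeCpuUsage": 2050 } } ] } ]
}
""")
for attempt_id, metrics in attempt_usage(job):
    print(attempt_id, metrics["cumulativeCpuUsage"])
```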

Interpreting the counters

Resource Usage Metrics

In general, the metrics in resourceUsageMetrics reflect the resource usage of a given task attempt:

  "resourceUsageMetrics" : {
    "heapUsage" : 623378432,
    "virtualMemoryUsage" : 2865340416,
    "physicalMemoryUsage" : 425193472,
    "cumulativeCpuUsage" : 2050
  },

The update logic lives in the Task class:

  /**
   * Update resource information counters
   */
  void updateResourceCounters() {
    // Update generic resource counters
    updateHeapUsageCounter();

    // Updating resources specified in ResourceCalculatorProcessTree
    if (pTree == null) {
      return;
    }
    pTree.updateProcessTree();
    long cpuTime = pTree.getCumulativeCpuTime();
    long pMem = pTree.getCumulativeRssmem();
    long vMem = pTree.getCumulativeVmem();
    // Remove the CPU time consumed previously by JVM reuse
    cpuTime -= initCpuCumulativeTime;
    counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
    counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
    counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
  }

Progress Split Counter

However, one group of counters in job-trace.json looks odd:

 "clockSplits" : [ 4201, 5, 4, 5, 4, 5, 5, 4, 5, 4, 5, 5 ],
 "cpuUsages" : [ 170, 171, 171, 171, 171, 171, 170, 171, 171, 171, 171, 171 ],
 "vmemKbytes" : [ 116591, 349773, 582955, 816136, 1049319, 1282500, 1515683, 1748864, 1982047, 2215229, 2448410, 2681593 ],
 "physMemKbytes" : [ 17301, 51903, 86505, 121107, 155710, 190312, 224915, 259516, 294119, 328722, 363323, 397926 ]

On the surface these are four arrays of size 12; they are in fact periodic samples, taken at intervals while the task runs, of the metrics at that moment.

The core class here is ProgressSplitsBlock:

ProgressSplitsBlock

  ProgressSplitsBlock(int numberSplits) {
    progressWallclockTime
      = new CumulativePeriodicStats(numberSplits);
    progressCPUTime
      = new CumulativePeriodicStats(numberSplits);
    progressVirtualMemoryKbytes
      = new StatePeriodicStats(numberSplits);
    progressPhysicalMemoryKbytes
      = new StatePeriodicStats(numberSplits);
  }

ProgressSplitsBlock holds four sets of statistics: wall-clock time since task start, CPU time, virtual memory usage, and physical memory usage. CumulativePeriodicStats and StatePeriodicStats differ slightly:

  • CumulativePeriodicStats is for cumulative quantities; summing the array values gives the total.

An easy-to-understand example of this kind of quantity would
be a distance traveled. It makes sense to consider that
portion of the total travel that can be apportioned to each
bucket.

170+171+171+171+171+171+170+171+171+171+171+171 = 2050

  • StatePeriodicStats is for averages over an interval; each array value is effectively the median of that interval.

An easy-to-understand example of this kind of quantity would
be a temperature. It makes sense to consider the mean
temperature over a progress range.

ProgressSplitsBlock is created and updated in TaskInProgress:

TaskInProgress

  • Creating the ProgressSplitsBlock:
  synchronized ProgressSplitsBlock getSplits(TaskAttemptID statusAttemptID) {
    ProgressSplitsBlock result = splitsBlocks.get(statusAttemptID);

    if (result == null) {
      result
        = new ProgressSplitsBlock
            (conf.getInt(JTConfig.JT_JOBHISTORY_TASKPROGRESS_NUMBER_SPLITS,
                         ProgressSplitsBlock.DEFAULT_NUMBER_PROGRESS_SPLITS));
      splitsBlocks.put(statusAttemptID, result);
    }

    return result;
  }

DEFAULT_NUMBER_PROGRESS_SPLITS is 12, which is why the arrays we see in the JSON have size 12.

  • Updating the ProgressSplitsBlock:
      Counters.Counter cpuCounter = counters.findCounter(CPU_COUNTER_KEY);
      if (cpuCounter != null && cpuCounter.getCounter() <= Integer.MAX_VALUE) {
        splitsBlock.progressCPUTime.extend
          (newProgress, (int)(cpuCounter.getCounter()));
      }

The extend method has one special case: by the time the next update arrives, the task's progress may have jumped a wide span, say from 30% straight to 90%, and the intermediate buckets have to be filled in. Those intermediate values are therefore not actual measurements but smoothed, interpolated results.
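
For intuition, here is a toy sketch of that bucket-filling behavior. This is my own simplification, not the actual CumulativePeriodicStats code; the names and the even split across buckets are illustrative:

```python
def extend(buckets, last_progress, last_total, new_progress, new_total):
    """Spread the growth since the previous update evenly over all
    progress buckets covered by the jump (a linear interpolation)."""
    n = len(buckets)                       # DEFAULT_NUMBER_PROGRESS_SPLITS = 12
    lo = int(last_progress * n)
    hi = min(int(new_progress * n), n - 1)
    delta = new_total - last_total
    covered = hi - lo + 1
    for i in range(lo, hi + 1):
        buckets[i] += delta // covered     # smoothed, not measured
    return new_progress, new_total

buckets = [0] * 12
state = (0.0, 0)
state = extend(buckets, *state, 0.30, 600)    # measured at 30% progress
state = extend(buckets, *state, 0.90, 2000)   # next measurement jumps to 90%
print(buckets)   # buckets 3..10 were filled in by interpolation
```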

Enable @Transactional on private methods

Problem

In Spring, adding the @Transactional annotation to a method wraps the database operations inside that method in a transaction.

However, if part of the business logic should run outside the transaction, for example:

public void method(){
    // 数据库操作,开启事务
    Result result = handleDbOperation();
    // 业务逻辑处理,不需要事务
    handleBizLogic(result);
}

@Transactional
public void handleDbOperation(){
……
}

then the annotation here actually has no effect.

Why?

Public visibility

The Spring documentation explains why:

When using proxies, you should apply the @Transactional annotation only to methods with public visibility. If you do annotate protected, private or package-visible methods with the @Transactional annotation, no error is raised, but the annotated method does not exhibit the configured transactional settings. Consider the use of AspectJ (see below) if you need to annotate non-public methods.

External call

in proxy mode (which is the default), only external method calls coming in through the proxy are intercepted. This means that self-invocation, in effect, a method within the target object calling another method of the target object, will not lead to an actual transaction at runtime even if the invoked method is marked with @Transactional
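
The mechanism can be mimicked outside Spring. In the sketch below (illustrative Python, not Spring itself), a wrapper object intercepts calls arriving from outside, but a self-invocation inside the target goes straight to the raw object and never touches the wrapper:

```python
class Target:
    def outer(self):
        self.inner()              # self-invocation: calls the raw object directly

    def inner(self):
        pass                      # imagine @Transactional here

class Proxy:
    """Stands in for a Spring AOP proxy: intercepts external calls only."""
    def __init__(self, target):
        self.target = target
        self.intercepted = []     # where "begin/commit transaction" would happen

    def __getattr__(self, name):
        method = getattr(self.target, name)
        def wrapper(*args, **kwargs):
            self.intercepted.append(name)
            return method(*args, **kwargs)
        return wrapper

proxy = Proxy(Target())
proxy.outer()
print(proxy.intercepted)          # only 'outer' was intercepted; inner() slipped past
```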

Workaround

Another bean

Create a separate service and move all of the transactional code into it, turning the internal call into an external one:

@Autowired
private DbService dbservice;

public void method(){
    // 数据库操作,开启事务
    Result result = dbservice.handleDbOperation();
    // 业务逻辑处理,不需要事务
    handleBizLogic(result);
}

TransactionUtil

This likewise converts the call into an external one, but more elegantly: a lambda is passed as the argument, so none of the code has to be moved.

@Autowired
private TransactionHelper helper;

public void method(){
    // 数据库操作,开启事务
    Result result = helper.withTransaction(() -> handleDbOperation());
    // 业务逻辑处理,不需要事务
    handleBizLogic(result);
}

// TransactionHelper.java :
@Service
public class TransactionHelper {
    @Transactional
    public <T> T withTransaction(Supplier<T> supplier) {
        return supplier.get();
    }
    @Transactional
    public void withTransaction(Runnable runnable) {
        runnable.run();
    }
}

Tips for Natural English Pronunciation

Vowels and consonants

Put simply, vowels are sounds produced by the vibration of the vocal cords, while consonants are produced by the friction of air; together they form syllables. See the explanation on Guokr (果壳网) for more.

In English the vowels are the letters A, E, I, O, U; all other letters can be treated as consonants, with two exceptions that act as vowels in the following cases:
* W (COW, HOW)
* Y (SKY, FLY)

Pronunciation techniques

1. Linking

Link the final sound of one word to the initial sound of the next. There are several cases:

1.1 Linking a consonant to a vowel

  • Not at all [nɔ tɔ:l]
  • Call it a day [kɔ: li dei]

1.2 Linking a vowel to a vowel

  • How are you [ha ju:]
  • Go away [ɡə ˈwei]

1.3 Linking a consonant to a consonant

  • Would you / could you (pronounced like [dʒ], not [d])
  • About you / Last year (pronounced like [tʃ], not [t])
  • Six years old (pronounced like [ʃ])

2. Unreleased stops

When a word ends in a plosive (p/b/t/d/k/g) that cannot be linked to the first sound of the next word, the plosive can be dropped, for example:

  • Sit down
  • blackboard
  • Bed and breakfast
  • Have a good time

A plosive can only be dropped at the end of a word. When linking applies, the plosive is kept, for example:

  • Look after (linking)
  • Out there (unreleased stop)

3. Weak forms: reading sentences with rhythm

3.1 English expresses rhythm by weakening function words.

For example, the four sentences below have the same stressed words, so they can be read at the same speed and to the same beat.
* Cows eat grass
* The cows eat the grass
* The cows are eating the grass
* The cows have been eating the grass

3.2 English is a stress-timed language

In English, the time it takes to say a sentence depends on the number of stressed syllables; in Chinese it depends on the total number of syllables.

3.3 Weaken function words, stress content words

  • Function words
    1. articles: a an the
    2. prepositions: in on by with at for
    3. pronouns: I you they that it
    4. conjunctions: and but or not so
    5. auxiliary verbs: be have has do does shall did will should
    6. modal verbs: may might must
  • Content words
    1. verbs
    2. nouns
    3. adjectives
    4. adverbs: now then often always here there everywhere
    5. negative forms: couldn’t won’t didn’t

3.4 Weak forms apply to a neutral tone without emotional coloring

In different situations the tone may differ, and different words get stressed.

4. Weak forms in detail

4.1 Reduced vowels: weakened vowels all reduce to /ə/

  • I have a/eɪ -> ə/ pen, I have an apple
  • Please come home as/æz -> əz/ soon as possible
  • That’s all for/fɔr -> fər/ today
  • Anything but/bʌt -> bət/ that
  • Ladies and/ænd -> ənd/ gentlemen
  • I can/kæn -> kən/ do it (the negative form must be stressed)

4.2 /h/ is dropped in weak forms

  • I know he doesn’t like him/him -> im/
  • Send her/hər -> ər/ away
  • She has/hæz -> əz/ come

5. Differences between American and British pronunciation

5.1 The R sound (American English is rhotic; British English is not)

  • bird: AmE [bɜrd], BrE [bɜ:d]
  • park: AmE [pɑrk], BrE [pɑ:k]

5.2 The A sound (AmE /æ/, BrE /ɑ:/)

  • dance: AmE [dæns], BrE [dɑ:ns]
  • can’t: AmE [kænt], BrE [kɑ:nt]

5.3 The O sound (AmE /ɑ/, BrE /ɒ/)

  • hot: AmE [hɑt], BrE [hɒt]
  • honest: AmE [ˈɑnɪst], BrE [ˈɒnɪst]

5.4 The -ary ending

  • necessary: AmE [ˈnesəseri], BrE [ˈnesəsəri]
  • primary: AmE [ˈpraɪmeri], BrE [ˈpraɪməri]

5.5 Summary

  • American pronunciation is rhotic and uses a wider mouth opening

MyBatis localCache: Problem and Solution

Background

In unit tests we need to verify that the system behaves correctly in different scenarios. Setting up each scenario means patching the data in the db accordingly, while rolling the transaction back afterwards guarantees that the db returns to its initial state when the test finishes.

Problem

The test class is already annotated with @Transactional. In the following test scenario,

result1 = queryResult();
jdbcTemplate.update("***");
jdbcTemplate.update("***");
result2 = queryResult();

result1 and result2 turn out to be identical, which is not what we expect.

Root cause

MyBatis enables localCache by default, with a default scope of SESSION. Within the same session, when an identical query is executed again, the result is read straight from the cache. (The updates here go through jdbcTemplate, so MyBatis never sees them and its cached result stays stale.)

  @SuppressWarnings("unchecked")
  @Override
  public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
    ……
    List<E> list;
    try {
      queryStack++;
      list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
      if (list != null) {
        handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
      } else {
        list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
      }
    } finally {
      queryStack--;
    }
    ……
    return list;
  }

Solution

Simply add flushCache="true" to the select element; then the localCache is flushed whenever this statement executes, so the next query goes directly to the database:

    <select id="selectAll" resultMap="BaseResultMap" flushCache="true">
        select
        <include refid="Base_Column_List"/>
        from Table
    </select>

With this option configured, the cache is cleared before execution, ensuring a fresh query against the database:

  @Override
  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
      throws SQLException {
    Cache cache = ms.getCache();
    if (cache != null) {
      flushCacheIfRequired(ms);
      ……
  }

  private void flushCacheIfRequired(MappedStatement ms) {
    Cache cache = ms.getCache();
    if (cache != null && ms.isFlushCacheRequired()) {
      tcm.clear(cache);
    }
  }

Note, however, that the default MyBatis cache scope is SESSION, which means that once the cache is flushed, every SQL statement within that session is affected.
The cache scope can be set to session level or statement level in the MyBatis configuration:

<setting name="localCacheScope" value="SESSION"/>
<setting name="localCacheScope" value="STATEMENT"/>

Enabling HTTPS for Free

I came across a post on coolshell about enabling HTTPS for free, and immediately followed the steps to try it out.

The overall process went smoothly, and the crontab entry was created:

ubuntu@:~$ cat /etc/cron.d/certbot
# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

But the automatic HTTP-to-HTTPS redirect did not work. I later found a similar issue online and enabled the forced redirect with the configuration below:

ServerName njujiang.tech
ServerAdmin njujiang@163.com
DocumentRoot /var/www/html/wordpress
Redirect / https://njujiang.tech/