historical posts moved

This commit is contained in:
d0zingcat
2019-11-21 01:42:01 +08:00
parent 8c3454545c
commit 1ea6397d47
21 changed files with 2784 additions and 38 deletions


@@ -0,0 +1,11 @@
---
title: "Sign in With Apple Erlang Backend Implementation"
date: 2019-11-12T19:35:45+08:00
category: ["Erlang"]
draft: true
---
> I was recently getting ready to hand over an old Python project and formally take over the SDK project. But once I pulled the code down, I found that the company's "intangible cultural heritage" can't be abandoned that quickly; after going around in circles I still couldn't escape the fate of writing Erlang. So I quietly dug out the copy of Programming Erlang a senior colleague had given me, learning on the job and writing my feature as I went.


@@ -0,0 +1,4 @@
---
title: 'Using Vps With Tor'
date: 2019-10-29T16:30:57+08:00
---


@@ -0,0 +1,48 @@
---
title: "Vim the Best Ide"
date: 2019-11-03T11:35:11+08:00
lastmod: 2019-11-03T11:35:11+08:00
draft: true
keywords: []
description: ""
tags: []
categories: []
author: ""
# You can also close(false) or open(true) something for this content.
# P.S. comment can only be closed
comment: false
toc: false
autoCollapseToc: false
postMetaInFooter: false
hiddenFromHomePage: false
# You can also define another contentCopyright. e.g. contentCopyright: "This is another copyright."
contentCopyright: false
reward: false
mathjax: false
mathjaxEnableSingleDollar: false
mathjaxEnableAutoNumber: false
# For unlisted posts you might not want the header or footer to show
hideHeaderAndFooter: false
# You can enable or disable out-of-date content warning for individual post.
# Comment this out to use the global config.
#enableOutdatedInfoWarning: false
flowchartDiagrams:
enable: false
options: ""
sequenceDiagrams:
enable: false
options: ""
---
<!--more-->
参考:
[Vim Splits - Move Faster and More Naturally](https://thoughtbot.com/blog/vim-splits-move-faster-and-more-naturally)


@@ -0,0 +1,23 @@
---
title: "20180819本周总结"
date: 2018-07-03T13:26:50+08:00
draft: false
---
> I didn't read much this week, so there is little worth writing down.
GDB "image not found" error
I'd heard the golang editor LiteIDE is already quite good, so I gave it a try. Debugging requires GDB to be installed beforehand (I installed it with brew, i.e. `brew install gdb`), but typing `gdb` directly in the terminal produced errors like:
<!--more-->
`dyld: Library not loaded: /usr/local/opt/mpfr/lib/libmpfr.6.dylib` together with `image not found`.
Google didn't have a direct answer, only a similar question: [dyld: Library not loaded: /usr/local/lib/libmpfr.4.dylib](https://stackoverflow.com/questions/49457773/dyld-library-not-loaded-usr-local-lib-libmpfr-4-dylib). Seeing brew mentioned there, I followed the thread, searched for dyld, and turned up [OS X / MPFR](http://labs.beatcraft.com/en/index.php?OS%20X%20%2F%20MPFR). It suggested `brew install mpfr`; I tried it and, to my surprise, the problem was solved. GDB worked again!
It's worth adding that while trying to debug golang programs I found GDB isn't actually that capable here, e.g. it has no goroutine support. I also found a replacement, [delve](https://github.com/derekparker/delve); look into it if you're interested!

source/_posts/2018-10-18.md Normal file

@@ -0,0 +1,798 @@
---
title: "2018-10-18"
date: 2018-11-03T18:59:36+08:00
draft: false
---
> The company wanted a small internal tech-sharing session this week, so I hastily threw together some rough algorithm material, mainly the simplest algorithms: bfs, dfs, union-find, popcount and so on. The content follows:
# Graph
*intro and definition*
<!--more-->
1. subgraph
2. connectivity
3. trees and forest
3.1 simple unbalanced tree sort
# BFS
1. Definition: A BFS traversal of a graph returns the nodes of the graph level by level.
2. Application form: by queue
A queue is a line: if you're the first to get in a bus line, you're the first to get on the bus. First In, First Out.
## leetcode
```
515. Find Largest Value in Each Tree Row
You need to find the largest value in each row of a binary tree.
Example:
Input:
1
/ \
3 2
/ \ \
5 3 9
Output: [1, 3, 9]
```
*solution*
```golang
package main

import "fmt"

type TreeNode struct {
    Val   int
    Left  *TreeNode
    Right *TreeNode
}

func bfs(root *TreeNode) []int {
    if root == nil {
        return nil
    }
    var queue []*TreeNode
    var res []int
    queue = append(queue, root)
    pos := 0
    for pos < len(queue) {
        max := ^int(^uint(0) >> 1) // smallest int
        length := len(queue)
        for i := pos; i < length; i++ {
            t := queue[i]
            if t.Val > max {
                max = t.Val
            }
            if queue[i].Left != nil {
                queue = append(queue, queue[i].Left)
            }
            if queue[i].Right != nil {
                queue = append(queue, queue[i].Right)
            }
            pos++
        }
        res = append(res, max)
    }
    return res
}

func main() {
    root := &TreeNode{Val: 1}
    root.Left = &TreeNode{Val: 3}
    root.Right = &TreeNode{Val: 2}
    root.Left.Left = &TreeNode{Val: 5}
    root.Left.Right = &TreeNode{Val: 3}
    root.Right.Right = &TreeNode{Val: 9}
    fmt.Println(bfs(root))
    fmt.Println(bfs(nil))
}
```
*Attention*
- empty data set may cause exception
- the queue grows while a level is being processed, so snapshot the level's length first and advance an offset instead of re-reading len(queue) mid-level
# DFS
1. definition
2. usage
3. complexity
4. further usage (1. path between two vertices 2. find a cycle in the graph)
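Item 2 of the further usage, finding a cycle, can be sketched with a short DFS; this is a minimal example of mine (the adjacency-list representation is an assumption, not from the talk): for an undirected graph, remember the vertex we came from so that the edge back to the parent is not mistaken for a cycle.

```go
package main

import "fmt"

// hasCycle reports whether an undirected graph, given as an adjacency
// list, contains a cycle. The DFS tracks the parent vertex so the edge
// it arrived by is not counted as a back edge.
func hasCycle(adj [][]int) bool {
    visited := make([]bool, len(adj))
    var dfs func(u, parent int) bool
    dfs = func(u, parent int) bool {
        visited[u] = true
        for _, v := range adj[u] {
            if !visited[v] {
                if dfs(v, u) {
                    return true
                }
            } else if v != parent {
                return true // reached an already-visited vertex: back edge
            }
        }
        return false
    }
    for i := range adj {
        if !visited[i] && dfs(i, -1) {
            return true
        }
    }
    return false
}

func main() {
    triangle := [][]int{{1, 2}, {0, 2}, {0, 1}}
    path := [][]int{{1}, {0, 2}, {1}}
    fmt.Println(hasCycle(triangle)) // true
    fmt.Println(hasCycle(path))     // false
}
```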
## leetcode
```
547. Friend Circles
There are N students in a class. Some of them are friends, while some are not. Their friendship is transitive in nature. For example, if A is a direct friend of B, and B is a direct friend of C, then A is an indirect friend of C. And we defined a friend circle is a group of students who are direct or indirect friends.
Given a N*N matrix M representing the friend relationship between students in the class. If M[i][j] = 1, then the ith and jth students are direct friends with each other, otherwise not. And you have to output the total number of friend circles among all the students.
Example 1:
Input:
[[1,1,0],
[1,1,0],
[0,0,1]]
Output: 2
Explanation:The 0th and 1st students are direct friends, so they are in a friend circle.
The 2nd student himself is in a friend circle. So return 2.
Example 2:
Input:
[[1,1,0],
[1,1,1],
[0,1,1]]
Output: 1
Explanation:The 0th and 1st students are direct friends, the 1st and 2nd students are direct friends,
so the 0th and 2nd students are indirect friends. All of them are in the same friend circle, so return 1.
Note:
N is in range [1,200].
M[i][i] = 1 for all students.
If M[i][j] = 1, then M[j][i] = 1.
```
*DFS solution*
```golang
func dfs(M [][]int, i int, visit []bool) {
    for j := range M {
        if M[i][j] == 1 && !visit[j] {
            visit[j] = true
            dfs(M, j, visit)
        }
    }
}

func findCircleNum(M [][]int) int {
    visit := make([]bool, len(M))
    ans := 0
    for i := range M {
        if !visit[i] {
            ans++
            dfs(M, i, visit)
        }
    }
    return ans
}
```
*Union find solution*
```golang
var (
    size  int
    pre   []int
    rank  []int
    count int
)

// findPre walks up to the root without path compression.
func findPre(x int) int {
    if pre[x] == x {
        return x
    }
    return findPre(pre[x])
}

// compFindPre finds the root and compresses the path along the way.
func compFindPre(x int) int {
    if pre[x] == x {
        return x
    }
    pre[x] = compFindPre(pre[x])
    return pre[x]
}

func union(x, y int) {
    rootX := compFindPre(x)
    rootY := compFindPre(y)
    if rootX == rootY {
        return
    }
    if rank[rootX] < rank[rootY] {
        pre[rootX] = rootY
    } else {
        pre[rootY] = rootX
        if rank[rootX] == rank[rootY] {
            rank[rootX]++
        }
    }
    count--
}

func FindCircleNum(M [][]int) int {
    size = len(M)
    count = size
    pre = make([]int, size)
    rank = make([]int, size)
    for i := 0; i < size; i++ {
        pre[i] = i
        rank[i] = 1
    }
    for i := 0; i < size; i++ {
        for j := i + 1; j < size; j++ {
            if M[i][j] == 1 {
                union(i, j)
            }
        }
    }
    return count
}
```
# Union find
> In computer science, a disjoint-set data structure (also called a union-find data structure or merge-find set) is a data structure that tracks a set of elements partitioned into a number of disjoint (non-overlapping) subsets. It provides near-constant-time operations (bounded by the inverse Ackermann function) to add new sets, to merge existing sets, and to determine whether elements are in the same set. In addition to many other uses (see the Applications section), disjoint-sets play a key role in Kruskal's algorithm for finding the minimum spanning tree of a graph.
*make connections*
1. usage scenarios
2. find
3. union
4. find and compression
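The four operations listed above fit in a minimal standalone sketch of mine (the `DSU` type and its names are assumptions, not from the talk): `Find` with path compression plus `Union` by rank, the same ideas used in the friend-circles solution earlier.

```go
package main

import "fmt"

// DSU is a minimal disjoint-set sketch: Find with path compression,
// Union by rank.
type DSU struct {
    parent, rank []int
}

func NewDSU(n int) *DSU {
    d := &DSU{parent: make([]int, n), rank: make([]int, n)}
    for i := range d.parent {
        d.parent[i] = i // every element starts as its own root
    }
    return d
}

func (d *DSU) Find(x int) int {
    if d.parent[x] != x {
        d.parent[x] = d.Find(d.parent[x]) // compress the path to the root
    }
    return d.parent[x]
}

func (d *DSU) Union(x, y int) {
    rx, ry := d.Find(x), d.Find(y)
    if rx == ry {
        return
    }
    if d.rank[rx] < d.rank[ry] {
        rx, ry = ry, rx // attach the shorter tree under the taller one
    }
    d.parent[ry] = rx
    if d.rank[rx] == d.rank[ry] {
        d.rank[rx]++
    }
}

func main() {
    d := NewDSU(4)
    d.Union(0, 1)
    fmt.Println(d.Find(0) == d.Find(1)) // true
    fmt.Println(d.Find(2) == d.Find(0)) // false
}
```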
# Greatest Common Divisor Algorithm
> In mathematics, the Euclidean algorithm[a], or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two numbers, the largest number that divides both of them without leaving a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is an example of an algorithm, a step-by-step procedure for performing a calculation according to well-defined rules, and is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.
> The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105 (as 252 = 21 × 12 and 105 = 21 × 5), and the same number 21 is also the GCD of 105 and 252 − 105 = 147. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal. When that occurs, they are the GCD of the original two numbers. By reversing the steps, the GCD can be expressed as a sum of the two original numbers each multiplied by a positive or negative integer, e.g., 21 = 5 × 105 + (−2) × 252. The fact that the GCD can always be expressed in this way is known as Bézout's identity.
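The subtraction form described in that quote can be written down directly; a small sketch of mine (the modulo version used further below is the faster variant of the same idea):

```go
package main

import "fmt"

// SubGCD is the subtraction form of Euclid's algorithm: keep replacing
// the larger number by its difference with the smaller one until the
// two become equal. Assumes both inputs are positive.
func SubGCD(a, b int64) int64 {
    for a != b {
        if a > b {
            a -= b
        } else {
            b -= a
        }
    }
    return a
}

func main() {
    fmt.Println(SubGCD(252, 105)) // 21, matching the example above
}
```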
*Binary Greatest Common Divisor Algorithm*
> The binary GCD algorithm, also known as Stein's algorithm, is an algorithm that computes the greatest common divisor of two nonnegative integers. Stein's algorithm uses simpler arithmetic operations than the conventional Euclidean algorithm; it replaces division with arithmetic shifts, comparisons, and subtraction. Although the algorithm was first published by the Israeli physicist and programmer Josef Stein in 1967,[1] it may have been known in 1st-century China.
```golang
// Stein's Algorithm
func BinaryGCD(u, v int64) int64 {
    if u == v {
        return u
    }
    if v == 0 {
        return u
    }
    if u == 0 {
        return v
    }
    if ^u&1 == 1 { // u is even
        if v&1 == 1 { // v is odd
            return BinaryGCD(u>>1, v)
        }
        return BinaryGCD(u>>1, v>>1) << 1
    }
    // u is odd
    if ^v&1 == 1 { // v is even
        return BinaryGCD(u, v>>1)
    }
    // v is odd
    if u > v {
        return BinaryGCD((u-v)>>1, v)
    }
    return BinaryGCD((v-u)>>1, u)
}

// BGCDRecurrence is the iterative version of Stein's algorithm.
func BGCDRecurrence(u, v int) int {
    if u == v {
        return u
    }
    if u == 0 {
        return v
    }
    if v == 0 {
        return u
    }
    var shift uint
    // factor out the common powers of two
    for shift = 0; (u|v)&1 == 0; shift++ {
        u >>= 1
        v >>= 1
    }
    for u&1 == 0 {
        u >>= 1
    }
    for {
        for v&1 == 0 {
            v >>= 1
        }
        if u > v {
            u, v = v, u
        }
        v = (v - u) >> 1
        if v == 0 {
            break
        }
    }
    return u << shift
}
```
*Common GCD Algorithm*
```golang
func CommonGCD(x, y int64) int64 {
    if x < y {
        x, y = y, x
    }
    for x%y != 0 {
        x, y = y, x%y
    }
    return y
}
```
*Benchmark*
```golang
// small numbers
// Euclidean algorithm
func BenchmarkCommonGCD(b *testing.B) {
    for i := 0; i < b.N; i++ {
        CommonGCD(18, 12)
    }
}

// Stein's Algorithm
func BenchmarkBinaryGCD(b *testing.B) {
    for i := 0; i < b.N; i++ {
        BinaryGCD(18, 12)
    }
}
```
1. Euclidean
```
goos: windows
goarch: amd64
pkg: github.com/d0zingcat/labs/gcd/common
BenchmarkCommonGCD-4 100000000 20.0 ns/op
PASS
ok github.com/d0zingcat/labs/gcd/common 2.104s
```
2. Binary
```
goos: windows
goarch: amd64
pkg: github.com/d0zingcat/labs/gcd/binary
BenchmarkBinaryGCD-4 100000000 10.5 ns/op
PASS
ok github.com/d0zingcat/labs/gcd/binary 1.165s
```
**Almost double the efficiency!**
```golang
// int64 big numbers
// Euclidean algorithm
func BenchmarkCommonGCD(b *testing.B) {
    for i := 0; i < b.N; i++ {
        r := rand.New(rand.NewSource(time.Now().UnixNano()))
        CommonGCD(r.Int63(), r.Int63())
    }
}

// Stein's Algorithm
func BenchmarkBinaryGCD(b *testing.B) {
    for i := 0; i < b.N; i++ {
        r := rand.New(rand.NewSource(time.Now().UnixNano()))
        BinaryGCD(r.Int63(), r.Int63())
    }
}
```
1. Euclidean
```
goos: windows
goarch: amd64
pkg: github.com/d0zingcat/labs/gcd/common
BenchmarkCommonGCD-4 200000 10610 ns/op
PASS
ok github.com/d0zingcat/labs/gcd/common 2.388s
```
2. Binary
```
goos: windows
goarch: amd64
pkg: github.com/d0zingcat/labs/gcd/binary
BenchmarkBinaryGCD-4 200000 9885 ns/op
PASS
ok github.com/d0zingcat/labs/gcd/binary 2.192s
```
**The two differ by well under a microsecond per op here; both remain quite fast in practice.**
# Popcount Algorithm(Hamming Weight)
> Question: how to count all the 1s in a 0-1 binary string of a number
1. arithmetic ops: n % 2 == 1; n /= 2
2. bitwise op
2.1 iterated popcount
2.2 sparse popcount
2.3 dense popcount
2.4 lookup popcount
2.5 parallel popcount
2.6 to be continued... (couldn't follow the rest yet)
*code*
```golang
var pc [256]byte = [...]byte{
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8,
}
// The table could also be built at startup:
//func init() {
//    for i := range pc {
//        pc[i] = pc[i/2] + byte(i&1)
//    }
//}

func LookupCount(x uint64) int {
    // byte(x >> (k*8)) is equivalent to (x >> (k*8)) & 0xff:
    // look up the popcount of each of the eight bytes and sum them
    return int(pc[byte(x>>(0*8))] +
        pc[byte(x>>(1*8))] +
        pc[byte(x>>(2*8))] +
        pc[byte(x>>(3*8))] +
        pc[byte(x>>(4*8))] +
        pc[byte(x>>(5*8))] +
        pc[byte(x>>(6*8))] +
        pc[byte(x>>(7*8))])
}

func StupidCount(n int) int {
    count := 0
    for n != 0 {
        if n%2 == 1 {
            count++
        }
        n /= 2
    }
    return count
}

// log2(n) iterations
func NaiveCount(n int) int {
    count := 0
    for n != 0 {
        count += n & 0x1
        n >>= 1
    }
    return count
}

func SparseCount(n int) int {
    count := 0
    for n != 0 {
        count++
        // zero the lowest-order one-bit; note that
        // x != 0 && x&(x-1) == 0 means x is a power of 2
        n &= n - 1
    }
    return count
}

func DenseCount(n int64) int {
    count := 64
    n = ^n
    for n != 0 {
        count--
        n &= n - 1
    }
    return count
}

func ParallelCount(n int64) int64 {
    const (
        // (uint64_t is the unsigned 64-bit integer type defined in C99)
        m1  = 0x5555555555555555 // binary: 0101...
        m2  = 0x3333333333333333 // binary: 00110011..
        m4  = 0x0f0f0f0f0f0f0f0f // binary: 4 zeros, 4 ones ...
        m8  = 0x00ff00ff00ff00ff // binary: 8 zeros, 8 ones ...
        m16 = 0x0000ffff0000ffff // binary: 16 zeros, 16 ones ...
        m32 = 0x00000000ffffffff // binary: 32 zeros, 32 ones
        h01 = 0x0101010101010101 // the sum of 256 to the power of 0, 1, 2, 3...
    )
    // or, equivalently
    const (
        mm1  = 0xffffffffffffffff / (0x2 + 1)
        mm2  = 0xffffffffffffffff / (0x4 + 1)
        mm4  = 0xffffffffffffffff / (0x10 + 1)
        mm8  = 0xffffffffffffffff / (0x100 + 1)
        mm16 = 0xffffffffffffffff / (0x10000 + 1)
        mm32 = 0xffffffffffffffff / (0x100000000 + 1)
    )
    n = (n & m1) + ((n >> 1) & m1)    // put count of each 2 bits into those 2 bits
    n = (n & m2) + ((n >> 2) & m2)    // put count of each 4 bits into those 4 bits
    n = (n & m4) + ((n >> 4) & m4)    // put count of each 8 bits into those 8 bits
    n = (n & m8) + ((n >> 8) & m8)    // put count of each 16 bits into those 16 bits
    n = (n & m16) + ((n >> 16) & m16) // put count of each 32 bits into those 32 bits
    n = (n & m32) + ((n >> 32) & m32) // put count of each 64 bits into those 64 bits
    return n
}
```
*test*
```golang
const N = 1<<62 + 5543445554

func TestStupidCount(t *testing.T) {
    fmt.Println(StupidCount(N))
}

func TestNaiveCount(t *testing.T) {
    fmt.Println(NaiveCount(N))
}

func TestSparseCount(t *testing.T) {
    fmt.Println(SparseCount(N))
}

func TestDenseCount(t *testing.T) {
    fmt.Println(DenseCount(N))
}

func TestLookupCount(t *testing.T) {
    fmt.Println(LookupCount(N))
}

func TestParallelCount(t *testing.T) {
    fmt.Println(ParallelCount(N))
}

func BenchmarkLookupCount(b *testing.B) {
    for i := 0; i < b.N; i++ {
        LookupCount(N)
    }
}

func BenchmarkStupidCount(b *testing.B) {
    for i := 0; i < b.N; i++ {
        StupidCount(N)
    }
}

func BenchmarkNaiveCount(b *testing.B) {
    for i := 0; i < b.N; i++ {
        NaiveCount(N)
    }
}

func BenchmarkSparseCount(b *testing.B) {
    for i := 0; i < b.N; i++ {
        SparseCount(N)
    }
}

func BenchmarkDenseCount(b *testing.B) {
    for i := 0; i < b.N; i++ {
        DenseCount(N)
    }
}

func BenchmarkParallelCount(b *testing.B) {
    for i := 0; i < b.N; i++ {
        ParallelCount(N)
    }
}
```
*benchmark*
```
goos: windows
goarch: amd64
pkg: github.com/d0zingcat/labs/popcount
BenchmarkLookupCount-4 2000000000 0.31 ns/op
BenchmarkStupidCount-4 20000000 107 ns/op
BenchmarkNaiveCount-4 30000000 38.9 ns/op
BenchmarkSparseCount-4 200000000 6.93 ns/op
BenchmarkDenseCount-4 30000000 41.2 ns/op
BenchmarkParallelCount-4 2000000000 0.31 ns/op
PASS
ok github.com/d0zingcat/labs/popcount 8.252s
```
Lookup popcount trades space for time; parallel popcount instead uses a divide-and-conquer strategy to count.
*leetcode*
```
191. Number of 1 Bits
Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).
Example 1:
Input: 11
Output: 3
Explanation: Integer 11 has binary representation 00000000000000000000000000001011
Example 2:
Input: 128
Output: 1
Explanation: Integer 128 has binary representation 00000000000000000000000010000000
```
*Solution*
```python
class Solution(object):
    def hammingWeight(self, n):
        """
        :type n: int
        :rtype: int
        """
        count = 0
        while n:
            count += 1
            n = n & (n - 1)
        return count
```
*Refer*
[Hamming weight](https://en.wikipedia.org/wiki/Hamming_weight)
[Bit-counting algorithms](https://bisqwit.iki.fi/source/misc/bitcounting/#SourceCode)
[Fast Bit Counting](https://gurmeet.net/puzzles/fast-bit-counting-routines/)
[Calculating Hamming Weight in O(1)](https://stackoverflow.com/questions/15233121/calculating-hamming-weight-in-o1)
[Hamming Weight的算法分析](https://www.cnblogs.com/jawiezhu/p/4395063.html)
[popcount 算法分析](http://www.cnblogs.com/Martinium/articles/popcount.html)
[Hamming Weight的算法分析转载](http://www.cnblogs.com/yongssu/p/4348479.html)
[Fermat number](https://en.wikipedia.org/wiki/Fermat_number)
## Sieve of Eratosthenes derived from popcount
> In mathematics, the sieve of Eratosthenes is a simple, ancient algorithm for finding all prime numbers up to any given limit.
> It does so by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the first prime number, 2. The multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them that is equal to that prime. This is the sieve's key distinction from using trial division to sequentially test each candidate number for divisibility by each prime.
*Two Approaches to get prime table*
1. normal
*code*
```golang
func PrimeNumbers(n int) (ans []int) {
    for i := 2; i < n; i++ {
        if isPrime(i) {
            ans = append(ans, i)
        }
    }
    return ans
}

func isPrime(n int) bool {
    for i := 2; i < int(math.Sqrt(float64(n)))+1; i++ {
        if n%i == 0 {
            return false
        }
    }
    return true
}
```
*test*
```golang
func BenchmarkPrimeNumbers(b *testing.B) {
    for i := 0; i < b.N; i++ {
        PrimeNumbers(1000)
    }
}
```
*benchmark*
```
goos: darwin
goarch: amd64
pkg: github.com/d0zingcat/learning/leetcode/primenumber/normal
BenchmarkPrimeNumbers-8 20000 63843 ns/op
PASS
```
2. eratosthenes
*code*
```golang
func PrimeTable(n int) (ans []int) {
    var a []int
    for i := 0; i < n; i++ {
        a = append(a, i)
    }
    for i := 2; i < n; i++ {
        if a[i] != 0 {
            ans = append(ans, a[i])
        }
        for j := i * 2; j < n; j += i {
            a[j] = 0
        }
    }
    return
}
```
*test*
```golang
func BenchmarkPrimeTable(b *testing.B) {
    for i := 0; i < b.N; i++ {
        PrimeTable(1000)
    }
}
```
*benchmark*
```
goos: darwin
goarch: amd64
pkg: github.com/d0zingcat/learning/leetcode/primenumber/eratosthenes
BenchmarkPrimeTable-8 100000 10839 ns/op
PASS
```
**The bigger n is, the more time the normal algorithm consumes each op!**
Apparently, building a prime table with the sieve is the best way to get all the primes smaller than n.
*Refer*
[Euclidean algorithm](https://en.wikipedia.org/wiki/Euclidean_algorithm)
[Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes)


@@ -0,0 +1,34 @@
---
title: "2018-10-24"
date: 2018-11-03T19:04:17+08:00
draft: true
---
> This week I continued with GOPL. The section on the defer statement brings up an interesting concept, the Linux [file descriptor](https://en.wikipedia.org/wiki/File_descriptor). Simply put, every open file in a linux process is referenced by a unique integer id: 0 points to standard input, 1 to standard output, 2 to standard error, and from 3 onward the ids point to files the process has opened, up to a default maximum of 1024. In other words, if too many files are open at once (more than about 1021), the process runs out of file descriptors. The full notes follow:
**File Descriptor**
An open file is referenced through a unique descriptor that maps the open file's metadata to the file itself. In the Linux kernel this descriptor is called a file descriptor (fd), represented as an integer (C type int). File descriptors are shared within user space (as opposed to kernel space; that is, at the layer our applications live in), allowing user programs to access files through them directly. The same file can be opened multiple times, by different processes or by the same one, and each open-file instance (roughly a Java File object) gets its own file descriptor. A single descriptor may also be used by multiple processes, and different processes can read and write the same file concurrently, so concurrent-modification problems exist.
<!--more-->
Each process has a table mapping descriptors to file locations (0: stdin, 1: stdout, 2: stderr; 3, 4, 5... map to the files you have opened).
How processes share files through file descriptors: a child process inherits the parent's table (the parent calls fork to create the child).
For each process the kernel maintains a list of its open files called the file table, indexed by fd; each entry holds the open file's information, including a pointer to the file's inode object and metadata such as the current file position and access mode (the inode holds the file's physical location). In short, it is a map whose key is the fd and whose value is the file's information: physical location, access mode, and so on.
By default a child gets a copy of the parent's file table, and changing one process's table does not affect the other's (if the child closes a file, the parent's file table is untouched), so fds can be used to share files. An fd is a non-negative C int, counting up from 0 to a default limit of 1024. By convention every process has at least three open file descriptors unless it explicitly closes them: 0 standard input (stdin), 1 standard output (stdout), 2 standard error (stderr).
*Conclusion*
1. fd operations live at the kernel API level; Java's standard API has no directly related operations (as far as I know, none that open a file directly from an fd).
2. Each process is associated with a map-like file table whose key is the fd and whose value includes the file's physical location and other information.
3. Files can be opened through an fd.
4. A child process copies the parent's file table.
*Refer*
*Linux System Programming*
[FileDescriptor文件描述符 与Linux文件系统](https://blog.csdn.net/zhjali123/article/details/72566685)

source/_posts/2018-11-18.md Normal file

@@ -0,0 +1,541 @@
---
title: "2018-11-18"
date: 2018-11-18T12:55:09+08:00
draft: true
---
This week a friend of mine happened to be scraping a Chinese-classics site, and everything came out as JSON, so he thought of storing it in mongodb and then exporting it to PDF. Since he brought it up, and I remembered that one of Tencent's job postings lists familiarity with mongodb as a plus, I figured I should look into it too; and of course in the end I never did get to how mongo actually works, did I (escape!). So, installation. On mac a straight `brew install mongodb` blew up:
```
php@7.0
mongodb: A full installation of Xcode.app 8.3.2 is required to compile this software.
Installing just the Command Line Tools is not sufficient.
Xcode 8.3.2 cannot be installed on macOS 10.11.
You must upgrade your version of macOS.
Error: An unsatisfied requirement failed this build.
```
<!--more-->Right, picking on my hackintosh: it runs El Capitan (10.11), can't be upgraded, can't install the latest Xcode. Fine, another route then: install docker. `brew install docker` went smoothly, but pulling complained the unix socket wasn't up. I figured the daemon just wasn't running, and after trying every command I could think of, I realized I hadn't installed all the components. A `brew search` turned up quite a few packages and I had no idea which to install. Google said I'd need docker-machine, docker-compose, plus a pile of virtualization bits to configure; that felt like too much trouble, so I went straight to the docker site and found a ready-made Docker app. I downloaded and unpacked it, opened it, and it told me it requires 10.12 or later. Ouch. Deleted it and looked for yet another way. Then it hit me: I have a virtual machine! Install docker inside the VM and connect from the host. My ubuntu cosmic was freshly installed, smooth as anything and fast to install on, although linuxbrew also left me stuck on which docker package to install (brew being a port from OSX, originally targeting the Mac). So back to the official docker docs; the steps were roughly:
```bash
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install docker-ce
```
But even after all those steps docker still wouldn't install; the package source simply didn't have it. A little digging showed that because ubuntu cosmic tracks the debian 10 line, `lsb_release -cs` resolves to buster, a release so new that docker hadn't been adapted to it yet. The answer, found here, turned out to be simple: [hard-code the distribution release](https://github.com/docker/for-linux/issues/442), i.e. `sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"`, and you can dig in happily; at least I haven't hit any unusable bug so far. Also worth mentioning: parallels' default shared network mode is enough for host-guest communication, which ifconfig also hints at:
```
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether c4:17:fe:d4:d9:71
inet6 fe80::c617:feff:fed4:d971%en0 prefixlen 64 scopeid 0x5
inet 192.168.50.201 netmask 0xffffff00 broadcast 192.168.50.255
nd6 options=1<PERFORMNUD>
media: autoselect
status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
ether 06:17:fe:d4:d9:71
media: autoselect
status: inactive
vnic0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=3<RXCSUM,TXCSUM>
ether 00:1c:42:00:00:08
inet 10.211.55.2 netmask 0xffffff00 broadcast 10.211.55.255
media: autoselect
status: active
vnic1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=3<RXCSUM,TXCSUM>
ether 00:1c:42:00:00:09
inet 10.37.129.2 netmask 0xffffff00 broadcast 10.37.129.255
media: autoselect
status: active
```
vnic0 is the network interface parallels virtualizes; the host has a similar virtual interface. Debugging didn't work at first, though; it took me half a day to realize that I had earlier, lazily, exported http_proxy and https_proxy in linux's .profile (loaded once at login; alias commands there seemed to have no effect), so all traffic went through the proxy, and requests to 10.x.x.x addresses therefore failed.
Having stepped through these pits, I naturally wanted to write a blog post summarizing them. That reminded me of something I'd long owed myself: a static asset server. Leaving aside how painful the various OSS offerings are (key-value management under an object-storage model is very inconvenient, and migrations always break due to compatibility and layout differences, so the images from my old posts became hard or impossible to maintain), my own server is already on a CN2 GIA line and should be plenty fast, so plain static file hosting is entirely adequate for the blog's images and attachments.
The plan I had in mind: write a file server in golang that strips a prefix before serving, e.g. https://blog.d0zingcat.xyz/fileserver/aaa/xxx.png would serve xxx.png from the aaa folder under the static directory, with the fileserver prefix removed automatically. The upside: the URL alone distinguishes static-asset requests, the files can be synced to the server with a plain rsync, and since it stays on blog.d0zingcat.xyz it can reuse that domain's https certificate. The downside: the static assets then depend on that domain, which is misleading, and it would also require nginx location rules that I, as an nginx novice, don't know how to write. So I gave that up. Then it's simple: set up a new domain and get an https certificate for it (my blog is fully and forcibly https, so http static resources would fail to load).
But the main problem is being broke (the same reason I reached for docker to try mongodb above): paying for a certificate for yet another domain, one I registered on a whim, felt too expensive. Which brings us to the free certificate authority, [letsencrypt](https://letsencrypt.org/). Previously I had always used their official certbot tool, which is huge and bloated. Then I remembered a blogger (I forget the URL now) mentioning [dehydrated](https://github.com/lukas2511/dehydrated), a tool that issues ACME certificates very lightly. Usage is roughly:
1. Clone the [dehydrated](https://github.com/lukas2511/dehydrated) project.
2. In the dehydrated directory create a config text file (example below), a challenge folder, and a domains.txt text file (example below).
3. Configure nginx accordingly (install it if you don't have it) so that requests under the well-known directory reach the corresponding files; an example follows. Note that once the `^~` rule is configured you must not also configure a plain `location / {}` block, or the two conflict and the files under well-known become unreachable. You can drop a test.txt with arbitrary contents into that directory to verify the configuration; I'm not sure of the exact mechanism behind the conflict.
4. Map the corresponding paths into the container and start it with `docker run --rm -d -v /home/d0zingcat/nginx/nginx.conf:/etc/nginx/nginx.conf -v /home/d0zingcat/dehydrated/challenge:/var/www/dehydrated -p 80:80 -p 443:443 nginx` (/home/d0zingcat is my $BASEDIR, and dehydrated also lives there).
5. Register with `./dehydrated --register --accept-terms`.
6. Issue the certificate with `./dehydrated -c`.
*My config*
Note that while experimenting it's best to change the CA field to the one containing "-staging-", otherwise frequent test runs may get you banned (it never happened to me, though it may have in the past).
Also, if you use the staging CA and successfully issue a certificate, it's of no use for deployment, only for testing. If you then simply change the CA back and re-issue, it will keep renewing the test certificate; the clean fix is to remove the accounts, chains, and certs directories under dehydrated entirely.
I also tried using a self-signed csr and private key, but the signing POST failed with a 400 ("not valid JSON"); I didn't dig further and can't be bothered. If you're interested, feel free to reproduce the problem and open an issue on the project.
```
########################################################
# This is the main config file for dehydrated #
# #
# This file is looked for in the following locations: #
# $SCRIPTDIR/config (next to this script) #
# /usr/local/etc/dehydrated/config #
# /etc/dehydrated/config #
# ${PWD}/config (in current working-directory) #
# #
# Default values of this config are in comments #
########################################################
# Which user should dehydrated run as? This will be implictly enforced when running as root
DEHYDRATED_USER=d0zingcat
# Which group should dehydrated run as? This will be implictly enforced when running as root
DEHYDRATED_GROUP=d0zingcat
# Resolve names to addresses of IP version only. (curl)
# supported values: 4, 6
# default: <unset>
#IP_VERSION=
# Path to certificate authority (default: https://acme-v02.api.letsencrypt.org/directory)
CA="https://acme-v02.api.letsencrypt.org/directory"
#CA="https://acme-staging-v02.api.letsencrypt.org/directory"
# Path to old certificate authority
# Set this value to your old CA value when upgrading from ACMEv1 to ACMEv2 under a different endpoint.
# If dehydrated detects an account-key for the old CA it will automatically reuse that key
# instead of registering a new one.
# default: https://acme-v01.api.letsencrypt.org/directory
#OLDCA="https://acme-v01.api.letsencrypt.org/directory"
# Which challenge should be used? Currently http-01, dns-01 and tls-alpn-01 are supported
CHALLENGETYPE="http-01"
# Path to a directory containing additional config files, allowing to override
# the defaults found in the main configuration file. Additional config files
# in this directory needs to be named with a '.sh' ending.
# default: <unset>
#CONFIG_D=
# Directory for per-domain configuration files.
# If not set, per-domain configurations are sourced from each certificates output directory.
# default: <unset>
#DOMAINS_D=
# Base directory for account key, generated certificates and list of domains (default: $SCRIPTDIR -- uses config directory if undefined)
BASEDIR=$SCRIPTDIR
# File containing the list of domains to request certificates for (default: $BASEDIR/domains.txt)
DOMAINS_TXT="${BASEDIR}/domains.txt"
# Output directory for generated certificates
CERTDIR="${BASEDIR}/certs"
# Output directory for alpn verification certificates
#ALPNCERTDIR="${BASEDIR}/alpn-certs"
# Directory for account keys and registration information
ACCOUNTDIR="${BASEDIR}/accounts"
# Output directory for challenge-tokens to be served by webserver or deployed in HOOK (default: /var/www/dehydrated)
WELLKNOWN="${BASEDIR}/challenge"
# Default keysize for private keys (default: 4096)
#KEYSIZE="4096"
# Path to openssl config file (default: <unset> - tries to figure out system default)
#OPENSSL_CNF=
# Path to OpenSSL binary (default: "openssl")
#OPENSSL="openssl"
# Extra options passed to the curl binary (default: <unset>)
#CURL_OPTS=
# Program or function called in certain situations
#
# After generating the challenge-response, or after failed challenge (in this case altname is empty)
# Given arguments: clean_challenge|deploy_challenge altname token-filename token-content
#
# After successfully signing certificate
# Given arguments: deploy_cert domain path/to/privkey.pem path/to/cert.pem path/to/fullchain.pem
#
# BASEDIR and WELLKNOWN variables are exported and can be used in an external program
# default: <unset>
#HOOK=
# Chain clean_challenge|deploy_challenge arguments together into one hook call per certificate (default: no)
#HOOK_CHAIN="no"
# Minimum days before expiration to automatically renew certificate (default: 30)
#RENEW_DAYS="30"
# Regenerate private keys instead of just signing new certificates on renewal (default: yes)
#PRIVATE_KEY_RENEW="yes"
# Create an extra private key for rollover (default: no)
#PRIVATE_KEY_ROLLOVER="no"
# Which public key algorithm should be used? Supported: rsa, prime256v1 and secp384r1
#KEY_ALGO=rsa
# E-mail to use during the registration (default: <unset>)
#CONTACT_EMAIL=
# Lockfile location, to prevent concurrent access (default: $BASEDIR/lock)
#LOCKFILE="${BASEDIR}/lock"
# Option to add CSR-flag indicating OCSP stapling to be mandatory (default: no)
#OCSP_MUST_STAPLE="no"
# Fetch OCSP responses (default: no)
#OCSP_FETCH="no"
# OCSP refresh interval (default: 5 days)
#OCSP_DAYS=5
# Issuer chain cache directory (default: $BASEDIR/chains)
#CHAINCACHE="${BASEDIR}/chains"
# Automatic cleanup (default: no)
#AUTO_CLEANUP="no"
# ACME API version (default: auto)
#API=auto
```
*My domains.txt*
I only sign single domains, with no aliases or wildcards, so this looks rather simple. See docs/domains_txt.md in the project for a thorough description.
```
files.d0zingcat.xyz
```
*My nginx.conf*
```
server {
    listen 80;
    server_name files.d0zingcat.xyz;

    location ^~ /.well-known/acme-challenge {
        alias /var/www/dehydrated;
    }

    # start
    location / {
        root /var/www/blog.d0zingcat.xyz;
        index index.html;
    }
    # end
}
```
Once signing completes you'll see three new folders under the dehydrated directory: accounts, chains, and certs; only the certs folder matters here. Inside is a directory per registered domain, files.d0zingcat.xyz in my case. Go in, take privkey.pem (the private key) and fullchain.pem (the certificate chain), and copy them out to the corresponding nginx directory (I renamed them to tell them apart). Then change the docker start command to `docker run --add-host="localhost:172.17.0.1" --rm -d -v /home/d0zingcat/nginx/nginx.conf:/etc/nginx/nginx.conf -v /home/d0zingcat/nginx/data/files.d0zingcat.xyz.key:/var/www/https/files.d0zingcat.xyz.key -v /home/d0zingcat/nginx/data/files.d0zingcat.xyz.crt:/var/www/https/files.d0zingcat.xyz.crt -v /home/d0zingcat/dehydrated/challenge:/var/www/dehydrated -p 80:80 -p 443:443 nginx`; the corresponding nginx config is:
```
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE

    #server {
    #    listen 80;
    #    server_name files.d0zingcat.xyz;
    #    location ^~ /.well-known/acme-challenge {
    #        alias /var/www/dehydrated;
    #    }
    #    #location / {
    #    #    root /var/www/blog.d0zingcat.xyz;
    #    #    index index.html;
    #    #}
    #}

    server {
        listen 80;
        server_name files.d0zingcat.xyz;
        return 302 https://$host$request_uri;
    }

    server {
        listen 443 http2 ssl;
        server_name files.d0zingcat.xyz;

        ssl_certificate /var/www/https/files.d0zingcat.xyz.crt;
        ssl_certificate_key /var/www/https/files.d0zingcat.xyz.key;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        keepalive_timeout 70;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
        ssl_session_tickets on;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        location / {
            #root /var/www/blog.d0zingcat.xyz;
            #index index.html;
            #proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://localhost:9000/;
        }

        access_log /var/log/nginx/nginx.vhost.access.log;
        error_log /var/log/nginx/nginx.vhost.error.log;
    }
}
```
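The certificates issued this way are valid for 90 days, and dehydrated ships a cron mode (`dehydrated -c`) that renews only certificates nearing expiry, so renewal can be scheduled instead of done by hand. A possible crontab entry (the install path, log path and schedule are my own assumptions, not from the post); note that nginx still needs a reload afterwards to pick up the new files, which is what dehydrated's hook support is for:

```
# Renew certificates via dehydrated's cron mode every Monday at 03:00
0 3 * * 1 /home/d0zingcat/dehydrated/dehydrated -c >> /var/log/dehydrated.log 2>&1
```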
Follow this configuration closely and there should be no big problems; restart the server and https access is done. The remaining chore is renewal: certificates are only valid for 90 days. I vaguely remember dehydrated has a hook mechanism, but I had no energy left to set it up; at worst I renew once every 90 days. You can see the nginx config reverse-proxies localhost:9000; that is the very simple static file server I wrote in Go, source as follows:
```golang
package main
import (
"log"
"net/http"
"os"
)
func main() {
args := os.Args[1:]
// Defaults; overridden only when all four arguments are supplied:
// port, directory to serve, certificate, private key.
// (The old check `len(args) > 1` would panic on args[2]/args[3]
// when fewer than four arguments were given.)
port, source := "9000", "tmp"
cert, key := "file.d0zingcat.xyz.pem", "file.d0zingcat.xyz.key"
if len(args) >= 4 {
port, source, cert, key = args[0], args[1], args[2], args[3]
}
handler := getHandler(source)
// nginx terminates TLS in front of this server, so plain HTTP suffices.
// To serve TLS directly instead, replace the last line with:
// log.Fatal(http.ListenAndServeTLS(":"+port, cert, key, handler))
_, _ = cert, key
log.Fatal(http.ListenAndServe(":"+port, handler))
}
func getHandler(dir string) http.Handler {
return http.FileServer(http.Dir(dir))
}
```
After compiling, I run it on the server with `nohup ./directory-browser "static" "/home/d0zingcat/blog/data/blog.d0zingcat.xyz.crt" "/home/d0zingcat/blog/data/blog.d0zingcat.xyz.key" &` (the static folder must be created beforehand). I assumed that once this and nginx were both up all would be well, but failure after failure, 502 Bad Gateway, left me frazzled with no idea how to handle it. Then it suddenly occurred to me there was one more thing to check, though I never figured out what was going on. But...
I finished the whole [GCTT golang tutorial series](https://studygolang.com/subject/2), because reading GOPL directly was just too slow. For one thing it is all in English, which I digest more slowly than Chinese; for another, a master is a master: the book's depth and breadth are immense, and it touches so many points that even skimming it while chasing each one down (I skipped the end-of-chapter exercises, far too much work) would take forever. So I first went through an easy, readable tutorial end to end. If your English is good, read the [original](https://golangbot.com/learn-golang-series/).
Having finally reached Go's concurrency primitives (channels and goroutines), I wrote a ~~picture-scraping script~~ to see what concurrency can do. I remember my old scraper, Python with bs4, single-threaded, took several hours (hardware and network conditions mattered too). This time I parse the HTML directly, regex out the image URLs, and download them concurrently: 1600 pictures in two and a half minutes, very fast and with a low failure rate. The key code is here:
```golang
package spider
import (
"fmt"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"regexp"
"strconv"
"strings"
"sync"
"github.com/d0zingcat/go-logger/logger"
)
var pagesCount int
var failedUrls []string
var mu *sync.Mutex = &sync.Mutex{}
func init() {
logger.SetRollingFile(".", "spider.log", 10, 50, logger.MB)
logger.SetLevel(logger.DEBUG)
html, err := reqPage(HOME_URL)
if err != nil {
logger.Error("Get total page count failed!")
panic(err)
}
re := regexp.MustCompile(`<span aria-current='page' class='page-numbers current'>(\d+)</span>`)
pagesMatch := re.FindAllStringSubmatch(html, -1)
if len(pagesMatch) > 0 && len(pagesMatch[0]) > 1 {
page := pagesMatch[0][1]
pagesCount, err = strconv.Atoi(page)
if err != nil {
logger.Error("Can not convert page number")
panic(err)
}
}
}
func Process(n int, dir string) {
count := pagesCount
flag := make([]int, pagesCount+1)
ch := make(chan int)
i := 1
for ; i <= pagesCount-n; i += n {
go dispatch(i, i+n, ch, dir)
}
go dispatch(i, pagesCount+1, ch, dir)
for count > 0 {
flag[<-ch] = 1
count--
}
logger.Info("Fail to get these urls: ", failedUrls)
}
func dispatch(start, end int, ch chan int, dir string) {
for i := start; i < end; i++ {
dynUrl := fmt.Sprintf(TEMPLATE_URL, i)
content, err := reqPage(dynUrl)
if err != nil {
logger.Error("Req ", dynUrl, " error!")
}
content = strings.Replace(content, "\r\n", "", -1)
content = strings.Replace(content, "\r", "", -1)
content = strings.Replace(content, "\n", "", -1)
re := regexp.MustCompile(`<li class="comment byuser(.*?</li>)`)
comments := re.FindAllString(content, -1)
for _, item := range comments {
re := regexp.MustCompile(`<img src="(.+?)".*?/>`)
imgs := re.FindAllStringSubmatch(item, -1)
// Guard against comments without an image: imgs[0][1] would panic.
if len(imgs) == 0 || len(imgs[0]) < 2 {
continue
}
err := storePic(imgs[0][1], dir, strconv.Itoa(i))
if err != nil {
// logger.Error(err)
continue
}
}
ch <- i
}
}
func storePic(url, location, prefix string) error {
if _, err := os.Stat(location); os.IsNotExist(err) {
err = os.Mkdir(location, 0744)
if err != nil {
logger.Error("create dir failed!")
return fmt.Errorf("Dir create fail")
}
}
ss := strings.Split(url, "/")
filename := ss[len(ss)-1]
resp, err := http.Get(url)
if err != nil {
logger.Error("Fail to request the pic: ", url)
recordFailedUrl(url)
return err
}
// Close the body so connections are not leaked across 1600+ downloads.
defer resp.Body.Close()
bodyBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
logger.Error("Fail to read pic response: ", url)
recordFailedUrl(url)
return err
}
err = ioutil.WriteFile(filepath.Join(location, prefix+"-"+filename), bodyBytes, 0744)
if err != nil {
logger.Error("Store pic failed: ", url)
recordFailedUrl(url)
return err
}
return nil
}
// recordFailedUrl appends to the shared failedUrls slice entirely under the
// mutex; the previous version returned a new slice that each caller
// reassigned to the global outside the lock, which was still a data race.
func recordFailedUrl(url string) {
mu.Lock()
failedUrls = append(failedUrls, url)
mu.Unlock()
}
func reqPage(url string) (string, error) {
resp, err := http.Get(url)
if err != nil {
logger.Error("Fail to request the page")
return "", err
}
// Always close the body, or the connection is leaked on every request.
defer resp.Body.Close()
htmlBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
logger.Error("Fail to read response")
return "", err
}
return string(htmlBytes), nil
}
```
While reading GOPL I learned about anonymous embedding of one struct in another, method-set promotion, and methods as first-class values, which gives a delegation-like mechanism. Pushing on, I reached the Bit Vector example; the book's source is as follows:
```golang
// An IntSet is a set of small non-negative integers.
// Its zero value represents the empty set.
type IntSet struct {
words []uint64
}
// Has reports whether the set contains the non-negative value x.
func (s *IntSet) Has(x int) bool {
word, bit := x/64, uint(x%64)
return word < len(s.words) && s.words[word]&(1<<bit) != 0
}
// Add adds the non-negative value x to the set.
func (s *IntSet) Add(x int) {
word, bit := x/64, uint(x%64)
for word >= len(s.words) {
s.words = append(s.words, 0)
}
s.words[word] |= 1 << bit
}
// UnionWith sets s to the union of s and t.
func (s *IntSet) UnionWith(t *IntSet) {
for i, tword := range t.words {
if i < len(s.words) {
s.words[i] |= tword
} else {
s.words = append(s.words, tword)
}
}
}
```
I could not quite see how this IntSet works, so I started digging; along the way I also collected some notes on Java's HashMap.
Notes on Java's HashMap vs. Hashtable:
- HashMap: not thread-safe, faster; null keys and values are allowed.
- Hashtable: thread-safe (synchronized), slower; null keys and values are not allowed.
HashMap is an array of linked lists (buckets). Its capacity is always a power of two at least as large as the current load; the threshold is load factor × capacity, and the table resizes once the entry count exceeds the threshold.
On put, if the key is null the first bucket is used: its chain is walked to find an existing null-key entry whose value is replaced, otherwise a new entry is created. If the key is not null, hashCode() yields the hash, indexFor(hash, table.length) picks the bucket, and the entry is stored there. When two keys land in the same bucket, the new entry is placed at the head of the chain pointing to the previous one, so a bucket may hold several entries with the newest first.
On get, a null key goes through getForNullKey(); otherwise the hash is computed, indexFor() gives the index, and the chain is traversed; the matching value, if any, is returned.

---
title: "2019.01.27"
date: 2019-01-27T18:26:00+08:00
draft: true
---
> Life is an endless stream of difficulties, and of hope.
Lately it has become painfully clear how urgent changing jobs is for me. I had planned to back in March 2018, but for lack of energy, or of self-discipline, I kept postponing and muddling through, never making up my mind. After another year of work, weighing what I actually do day to day, the pay, and how little my skills are growing, I am ever more convinced that the credit card center of a certain local Shanghai bank is no place to linger, or my whole life will be sunk into it. A quick analysis gives these reasons:
<!--more-->
1. Management and big-company disease. Documentation and the appearance of management are valued over actual management. It is chaotic: teams operate as fiefdoms, capable people get no say while posturing clowns thrive, seniority rules, and rules and process documents rain down endlessly, dragging every task out and burying developers in chores. Many days I am busy all day without knowing what I actually accomplished; that is a waste of life.
2. Salary. Admittedly my own fault for not studying when I should have, so without a good degree or background I come cheap. But I know I am worth more than this. A reminder to myself: a thoroughbred that pulls the millstone long enough starts to believe it is a mule. I believe in my own value and ability.
3. Team atmosphere. When I joined, the project group felt like a big family, but inexplicable friction keeps growing: within my own team, with the business side, with QA. Tempers worsen, management squeezes harder, and the mix of iron-fisted rules and blind micromanagement leaves me nothing worth staying for.
4. Technical growth. There is basically no room left for me to grow here (though there is still a great deal I could learn from 浩哥~~~~@404notfound). For now I do not want a career of pure backend Java business development, so it is time to go.
5. One should look around more at other companies, their cultures, and what they are building; repeating the same thing over and over, you stop noticing that you have stopped improving.
Today (this post too is a product of terminal procrastination) I was scrolling Weibo and realized you do not actually need this many reasons to quit (forgive the added drama):
![](https://files.d0zingcat.xyz/blog/posts/2019-01-27/705a5ff0gy1fxwr1xtdfxj20yp0liwhk.jpg)
![](https://files.d0zingcat.xyz/blog/posts/2019-01-27/b8b73ba1ly1fz3n2xsen1j20zl0qotfb.jpg)
Below, then, I will record some of my interview preparation, or rather the repayment of debts I have long owed myself.
## Tree(s)
- treesort
*Reference:* GOPL, page 101, ch4/treesort
Implementation:
```golang
package treesort
type Tree struct {
val int
left, right *Tree
}
func Add(root *Tree, val int) *Tree {
if root == nil {
t := new(Tree)
t.val = val
return t
}
if val < root.val {
root.left = Add(root.left, val)
} else {
root.right = Add(root.right, val)
}
return root
}
func AppendValues(root *Tree, values []int) []int {
if root != nil {
values = AppendValues(root.left, values)
values = append(values, root.val)
values = AppendValues(root.right, values)
}
return values
}
func Sort(values []int) []int {
var root *Tree
for _, v := range values {
root = Add(root, v)
}
values = AppendValues(root, values[:0])
return values
}
```
- Binary search tree
*References:* [多动态图详细讲解二叉搜索树](https://lufficc.com/blog/binary-search-tree) and [二叉查找树BST](http://songlee24.github.io/2015/01/13/binary-search-tree/)
Implementation:
```golang
package bst
/*
Go implementation of a binary search tree, written with reference to
https://lufficc.com/blog/binary-search-tree and
http://songlee24.github.io/2015/01/13/binary-search-tree/
*/
import (
"fmt"
"math"
"math/rand"
"strconv"
"time"
)
type Node struct {
key int
left *Node
right *Node
parent *Node
}
type Tree struct {
root *Node
}
type ITree interface {
New() ITree
Insert(k int)
Search(k int) *Node
Delete(n *Node)
Min() *Node
Max() *Node
TraversePreorder()
TraverseInorder()
TraversePostorder()
String() string
}
func (t *Tree) String() string {
return walk(t.root, 0)
}
func walk(n *Node, nos int) string {
var s string
if n == nil {
return ""
}
// fmt.Printf("%d\t%+v\t%+v\t%+v\n", nos, n, n.left, n.right)
if n.parent == nil {
s += strconv.Itoa(n.key) + "\n"
}
if n.left != nil {
if nos != 0 {
s += fmt.Sprintf("%*c", nos, ' ')
}
s += strconv.Itoa(n.left.key)
}
if n.right != nil {
s += fmt.Sprintf("%*c", 4+nos, ' ')
s += strconv.Itoa(n.right.key)
}
s += "\n"
s += walk(n.left, nos)
bon := 0
if n.left != nil {
bon = int(math.Log10(float64(n.left.key))) + 1
}
s += walk(n.right, nos+4+bon)
return s
}
func (t *Tree) Delete(n *Node) {
if n == nil {
return
}
if n.left == nil && n.right == nil {
if n.parent != nil {
if n.parent.left == n {
n.parent.left = nil
} else {
n.parent.right = nil
}
} else {
// Deleting the root: assigning to the receiver (t = nil) only
// changes the local pointer, so update t.root instead.
t.root = nil
}
} else if n.left != nil && n.right == nil {
n.left.parent = n.parent
if n.parent != nil {
if n.parent.left == n {
n.parent.left = n.left
} else {
n.parent.right = n.left
}
} else {
t.root = n.left
}
} else if n.left == nil && n.right != nil {
n.right.parent = n.parent
if n.parent != nil {
if n.parent.left == n {
n.parent.left = n.right
} else {
n.parent.right = n.right
}
} else {
t.root = n.right
}
} else {
// Two children: copy the in-order successor's key here,
// then delete the successor node instead.
suc := successor(n)
n.key = suc.key
t.Delete(suc)
}
}
func successor(n *Node) *Node {
if n == nil {
return n
}
if n.right != nil {
return min(n.right)
} else {
p := n.parent
for p != nil && p.right == n {
n = p
p = n.parent
}
return p
}
}
func predecessor(n *Node) *Node {
if n == nil {
return n
}
if n.left != nil {
return max(n.left)
} else {
p := n.parent
for p != nil && p.left == n {
n = p
p = n.parent
}
return p
}
}
func (t *Tree) Min() *Node {
return min(t.root)
}
func min(n *Node) *Node {
if n.left == nil {
return n
}
return min(n.left)
}
func (t *Tree) Max() *Node {
return max(t.root)
}
func max(n *Node) *Node {
if n.right == nil {
return n
}
return max(n.right)
}
func createTree() *Tree {
t := new(Tree)
var _ ITree = t
return t
}
func (t *Tree) New() ITree {
return createTree()
}
func (t *Tree) Search(k int) *Node {
return search(t.root, k)
}
func search(n *Node, k int) *Node {
if n == nil || n.key == k {
return n
}
if n.key > k {
return search(n.left, k)
} else {
return search(n.right, k)
}
}
func (t *Tree) Insert(k int) {
if t.root == nil {
t.root = Insert(k, t.root)
} else {
Insert(k, t.root)
}
}
func Insert(k int, n *Node) *Node {
if n == nil {
return &Node{k, nil, nil, n}
}
if k == n.key {
return nil
} else if k > n.key {
if n.right == nil {
n.right = &Node{k, nil, nil, n}
} else {
Insert(k, n.right)
}
} else {
if n.left == nil {
n.left = &Node{k, nil, nil, n}
} else {
Insert(k, n.left)
}
}
return nil
}
func (t *Tree) TraversePreorder() {
fmt.Print("Pre order: ")
traversePreorder(t.root)
fmt.Println()
}
func traversePreorder(n *Node) {
if n == nil {
return
}
fmt.Print(n.key, " ")
traversePreorder(n.left)
traversePreorder(n.right)
}
func (t *Tree) TraverseInorder() {
fmt.Print("In order: ")
traverseInorder(t.root)
fmt.Println()
}
func traverseInorder(t *Node) {
if t == nil {
return
}
traverseInorder(t.left)
fmt.Print(t.key, " ")
traverseInorder(t.right)
}
func (t *Tree) TraversePostorder() {
fmt.Print("Post order: ")
traversePostorder(t.root)
fmt.Println()
}
func traversePostorder(n *Node) {
if n == nil {
return
}
traversePostorder(n.left)
traversePostorder(n.right)
fmt.Print(n.key, " ")
}
func genRandomNums(n int) []int {
var array []int
for i := 0; i < n; i++ {
array = append(array, i+1)
}
r := rand.New(rand.NewSource(time.Now().Unix()))
for i := n - 1; i > -1; i-- {
j := r.Int() % (i + 1)
array[i], array[j] = array[j], array[i]
}
return array
}
```

---
title: "推荐使用Beancount来记账及部署私服记录"
date: 2019-06-12T22:40:43+08:00
draft: false
---
In his series 在Google的這四年 (My Four Years at Google), [郭大神][1] mentions that he is a heavy bookkeeping user and that he keeps his daily books with [beancount][2], which is how I first ran into double-entry accounting ([复式簿记][3]; google the theory yourself if interested, as I only half understand it beyond knowing what it is). The idea that the books always balance, with beans merely poured between different buckets, was eye-opening, because my previous bookkeeping was primitive: I simply logged my spending. Once credit instruments like credit cards and Huabei entered my habits, bookkeeping became genuinely hard: repayments could not be distinguished from the underlying purchases, so I could see what I spent but not where my money came from or where it went. So I decided to give beancount a try.
<!--more-->
Put simply, beancount is a Python-based bookkeeping tool. It defines its own plain-text syntax, so a text editor is all you need to start keeping books, and as long as you keep the ledger file yourself you never face the usual bookkeeping-app problems: data that cannot be exported, or data lost when the app shuts down or the developer disappears. The main reference is still the [official documentation][4] (the author is diligent and has written a great deal), but it is all in English, which is unfriendly to users in China, so here are the articles I mainly read:
[Beancount複式記賬](https://www.byvoid.com/zht/blog/beancount-bookkeeping-1)
[Beancount —— 命令行复式簿记](https://wzyboy.im/post/1063.html)
[beancount 起步](http://morefreeze.github.io/2016/10/beancount-thinking.html)
[Beancount使用经验](http://lidongchao.com/2018/07/20/has_header_in_csv_Sniffer/)
[beancount 简易入门指南](https://yuchi.me/post/beancount-intro/)
[基础认识|利用 Beancount 打造个人的记账系统1](http://freelancer-x.com/82/%E5%9F%BA%E7%A1%80%E8%AE%A4%E8%AF%86%EF%BD%9C%E5%88%A9%E7%94%A8-beancount-%E6%89%93%E9%80%A0%E4%B8%AA%E4%BA%BA%E7%9A%84%E8%AE%B0%E8%B4%A6%E7%B3%BB%E7%BB%9F%EF%BC%881%EF%BC%89/)
That is roughly most of what the Chinese-speaking community has written on the subject. I will not go over the syntax here; interested readers should read those articles (plus others findable via google, all written far better and more reliably than I could). So far I only record day-to-day cash flow. My payment habits are simple: Alipay, WeChat, two everyday debit cards, two dual-currency credit cards, and Huabei; I barely use cash apart from the occasional red packet between friends. I have used none of the powerful features the experts describe, such as tracking loans and advances between friends, bonds, investments, or discounting, nor have I automated bill imports as others have. Case in point: of 郭大神's three articles I had only read two before I could not wait any longer, started keeping books, and built everything below. My needs are simple for now; if I tinker further I may extend this post. Here I only share a few small tips.
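Before the tips, a one-transaction illustration of how the "beans between buckets" balancing works (the account names match my chart of accounts; the payee and amounts are invented):

```
2019-06-12 * "面馆" "午饭"
    Expenses:Food                 25.00 CNY
    Liabilities:CreditCard:CMB   -25.00 CNY
```

Every transaction must sum to zero (the second amount may even be omitted and inferred): the 25 CNY appearing in Expenses:Food is exactly the 25 CNY of new debt on the credit card, so a later repayment is just another transfer from an Assets account to the Liabilities account and is never double-counted as spending.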
## Manage ledger history with git
This needs little explanation: git can trace every historical version of the file. I recommend a private repository on [bitbucket](https://bitbucket.org/), which both keeps the data safe and gives simple multi-machine sync (server and local).
## Keep account openings in their own file
My directory layout is:
```
./ppb (probably short for "personal private beancount")
|__main.bean
|__accounts.bean
```
accounts.bean stores only the account openings; for example, mine is:
```
1970-01-01 open Assets:Cash
1970-01-01 open Assets:Bank:CN:BOCOM
1970-01-01 open Assets:Bank:CN:SPDB
1970-01-01 open Assets:Bank:CN:CMB
1970-01-01 open Assets:Org:CN:ALIPAY
1970-01-01 open Assets:Org:CN:WECHAT
1970-01-01 open Assets:Pension:CN:SH
1970-01-01 open Assets:Provident:CN:SH
1970-01-01 open Liabilities:CreditCard:SPDB
1970-01-01 open Liabilities:CreditCard:CMB
1970-01-01 open Liabilities:ALIPAY
1970-01-01 open Income:Lilith:Salary
1970-01-01 open Income:Redpacket
1970-01-01 open Expenses:Clothing
1970-01-01 open Expenses:Other
1970-01-01 open Expenses:Food
1970-01-01 open Expenses:Transport:Metro
1970-01-01 open Expenses:Transport:Airline
1970-01-01 open Expenses:Transport:Railway
1970-01-01 open Expenses:Transport:Coach
1970-01-01 open Expenses:Transport:Texi
1970-01-01 open Expenses:Housing:Rent
1970-01-01 open Expenses:Housing:Utilities
1970-01-01 open Expenses:Health:Medical
1970-01-01 open Expenses:Love
1970-01-01 open Expenses:Life
1970-01-01 open Expenses:Leave
1970-01-01 open Expenses:Tax
1970-01-01 open Expenses:Cloud
1970-01-01 open Expenses:Entertainment
1970-01-01 open Expenses:Travel
1970-01-01 open Expenses:Electronics
```
The categories are for reference only; I have only backfilled the books since June, so they do not yet cover every category I spend in. Once this file exists, a single added line `include "accounts.bean"` in `main.bean` imports it.
## Use fava for web preview and editing
### Installing beancount and fava
A few extra words on installing beancount and fava (fava's UI and features are far beyond the built-in web interface). Installing directly gave me errors (details omitted; probably a Python 3.7 issue), so on the server (I use Debian 9.4) first install the latest Python 3 with `sudo apt update && sudo apt install python3.7-dev`, then run `sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1` and `sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.7 2` so that Python 3.7 takes priority; `python --version` should now report 3.7 by default. (Please abandon Python 2.7 wherever possible: incompatibilities keep piling up, and upstream support ends early next year.) Then install pip with `curl -O https://bootstrap.pypa.io/get-pip.py && sudo python get-pip.py`, after which `pip install beancount && pip install fava` installs both painlessly.
### Configuring authentication
My original workflow was editing in VS Code with the beancount 0.3.5 extension, pushing to the git server, pulling on the server, and viewing in fava; then I found fava's built-in editor good enough and switched to using the server directly. But there is a problem: access control. fava has no built-in authentication (its developers consider that out of scope), so we add it via a [reverse proxy](https://zh.wikipedia.org/wiki/%E5%8F%8D%E5%90%91%E4%BB%A3%E7%90%86). I use `nginx version: nginx/1.14.2`; installing it is one line, `sudo apt install nginx`. (Note: basic auth itself, auth_basic, is built into stock nginx; only the auth_request directive, commented out in the config below, would additionally need http_auth_request_module, which you can check for with `/usr/sbin/nginx -V | grep --color=red 'http_auth_request_module'` and, if missing, google how to enable.) With nginx installed, create an account with `sudo bash -c "echo -n 'alice:' >> /etc/nginx/.htpasswd"` and `sudo bash -c "openssl passwd -apr1 >> /etc/nginx/.htpasswd"`
(here the user is alice; the second command prompts for the password, and you can add more users the same way). This approach needs only openssl and spares installing apache2-utils, which is tidier.
I first created the git repo locally, pushed it to bitbucket, then pulled it on the server. To make later operations easier, generate a key pair on the server with `ssh-keygen -t rsa -C "your.email@example.com" -b 4096` and [import it into bitbucket](https://confluence.atlassian.com/bitbucket/set-up-an-ssh-key-728138079.html); you can then clone the private repository over the ssh URL without being prompted for credentials every time (fatally, that prompt cannot be bypassed with arguments, which makes automated pushes awkward).
Then cd into the directory, `/home/d0zingcat/ppb/` in my case, and start fava with `nohup fava main.bean &`; by default it listens on port 5000. In theory fava behind a reverse proxy can be given a URL prefix, but every attempt failed for me, so I gave up: port 5000 is fava's alone (it is the only service on this box anyway :P). Then add the nginx config, a new file /etc/nginx/conf.d/bean.conf:
```
server {
listen 80;
server_name bean.d0zingcat.xyz;
location / {
auth_basic "d0zingcat's personal area";
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_pass http://localhost:5000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
#auth_request_set $auth_status $upstream_status;
}
}
```
Then, at your DNS provider, point the domain at this server ([cloudflare](https://www.cloudflare.com/) is a nice place to manage DNS). Restart nginx with `sudo systemctl restart nginx`, and visiting the domain pops up a prompt: ![](https://files.d0zingcat.xyz/blog/posts/beancount-recommendation/WX20190613-105814@2x.png)
Enter the account and the matching password and you can get in to see your ledger. Trying a wrong password shows:
![](https://files.d0zingcat.xyz/blog/posts/beancount-recommendation/WX20190613-105833@2x.png)
That basically completes the reverse proxy and authentication. But it still did not feel safe enough in this everything-over-https era, and with something as nice as ACME around, I decided to keep tinkering and add a TLS certificate.
Following the README of [An ACME Shell script: acme.sh](https://github.com/Neilpang/acme.sh), installation is one line: `curl https://get.acme.sh | sh`. Note that standalone mode (a self-contained http server, independent of nginx or apache) requires the socat dependency, also a one-liner: `sudo apt install socat`. Once installed, issue the certificate; for me that was `sudo ~/.acme.sh/acme.sh --issue -d bean.d0zingcat.xyz --standalone -k ec-256`. Remember to point the DNS record at this host's IP first, and to stop whatever service occupies port 80 on the machine.
Then edit the nginx config; mine is:
```
server {
listen 80;
server_name bean.d0zingcat.xyz;
return 302 https://$host$request_uri;
}
server {
listen 443 http2 ssl;
ssl_certificate /root/.acme.sh/bean.d0zingcat.xyz_ecc/fullchain.cer;
ssl_certificate_key /root/.acme.sh/bean.d0zingcat.xyz_ecc/bean.d0zingcat.xyz.key;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
keepalive_timeout 70;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets on;
ssl_stapling on;
ssl_stapling_verify on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
server_name bean.d0zingcat.xyz;
#root /var/www/html;
location / {
auth_basic "Lee's personal area";
auth_basic_user_file /etc/nginx/.htpasswd;
proxy_pass http://localhost:5000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
```
Swap the *server_name*, *ssl_certificate* and *ssl_certificate_key* fields for your own values, restart nginx, and that is it: an online beancount service with https and authentication.
![](https://files.d0zingcat.xyz/blog/posts/beancount-recommendation/WX20190613-133123@2x.png)
### Watching for file changes and auto-pushing
The program is still in progress; to be continued.
[1]: https://www.byvoid.com/
[2]: https://github.com/beancount/beancount
[3]: https://zh.wikipedia.org/wiki/%E5%A4%8D%E5%BC%8F%E7%B0%BF%E8%AE%B0
[4]: http://furius.ca/beancount/doc/index

---
title: "Daily Trivials"
date: 2019-08-02T11:03:54+08:00
draft: false
---
> In daily work and study, what trips you up most is rarely a big research topic but small problems and annoyances, so what accumulates most is troubleshooting notes and tips.
This post records solutions to problems I hit day to day, plus a few small hacks. Without further ado, let's begin.
- Test whether an nginx config is valid
The -c flag indicates a certain configuration file will follow; the -t flag tells Nginx to test our configuration.
`nginx -c /etc/nginx/nginx.conf -t`
- tigervnc-server fails to start
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
<!--more-->
```
touch /tmp/.X11-unix/X1
chmod 777 /tmp/.X11-unix/X1
```
- Force-unmount an external disk on macOS
`diskutil unmountDisk force /Volumes/DISK_NAME`
- Install kafka on macOS
```
brew cask install homebrew/cask-versions/java8
brew install kafka
```
To have launchd start kafka now and restart at login:
`brew services start kafka`
Or, if you don't want/need a background service you can just run:
`zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties && kafka-server-start /usr/local/etc/kafka/server.properties`
- Accidentally stripped admin rights from the only administrator on macOS: I carelessly ran `sudo dseditgroup -o edit -d $(whoami) -t user admin` (meaning only to remove myself from the wheel group), which removed group membership for the one account holding administrator rights, so every sudo then failed with `$(user) not in sudoers file, this incident will be reported....`
Reboot holding (or repeatedly tapping) cmd+r to enter recovery mode, choose a language, and from the installer screen open Disk Utility; check whether the disk named Macintosh HD is mounted, and if not, mount it with the Mount button in the toolbar. Quit Disk Utility, open Terminal from the menu bar's submenu (it runs as root), cd into `/Volumes/Macintosh HD/`, open `etc/sudoers` with vim, find the line `%admin ALL=(ALL) ALL`, and below it add the accidentally removed user: `${username} ALL=(ALL) ALL`. Save, quit, reboot, and sudo works again.
Then do not forget to re-add yourself to the administrators group with `/usr/sbin/dseditgroup -o edit -a $(whoami) -t user admin`, otherwise system or software updates cannot be authorized by just typing your password. For the record, this is the command that cost me everything: `sudo dseditgroup -o edit -d $(whoami) -t user admin`
- Building a mac app from the command line fails with `xcode-select: error: tool 'xcodebuild' requires Xcode, but active developer directory '/Library/Developer/CommandLineTools' is a command line tools instance`
`sudo xcode-select -s /Applications/Xcode.app/Contents/Developer`
- Xcode tools install
`xcode-select --install`
- Download an m3u8 video stream with ffmpeg
`./ffmpeg -i {src}.m3u8 -c copy {dst}.mp4`
- Use vim or nano as the editor for `crontab -e`
```
# Specify nano as the editor for crontab file
export VISUAL=nano; crontab -e
# Specify vim as the editor for crontab file
export VISUAL=vim; crontab -e
```
- Generate an ssh rsa key
`ssh-keygen -t rsa -C "{your email}" -b 4096`
- Install Erlang on CentOS
Download the rpm from Erlang Solutions: [Erlang Solutions](https://www.erlang-solutions.com/resources/download.html)
```
# install dependencies
yum install -y wxBase
yum install -y wxGTK
yum install -y wxGTK-gl
yum -y install -y unixODBC
yum -y install -y openssl-devel
rpm -ivh {els-erlang.rpm}
```
- Erlang installed via brew on macOS has no man pages
Add `export MANPATH='/usr/local/opt/erlang/lib/erlang/man'` to `~/.zshrc` (or to `~/.bash_profile` if you use bash instead of zsh)
- brew link fails with a permissions error
`sudo chown -R $USER:admin /usr/local/share` should set the correct ownership and group for all files and directories below and including /usr/local/share
- Install MySQL on macOS
```
brew install mysql
# `brew services` is bundled with Homebrew (homebrew/services); no separate install
brew services start mysql
mysqladmin -u root password welcome
```
- Cancel a vultr account
[Log In - Vultr.com](https://my.vultr.com/billing/cancel/)
- Three recommended tools for watching server process state, IO statistics, and live network traffic: htop, iostat and nload

---
title: Hello World
---
Welcome to [Hexo](https://hexo.io/)! This is your very first post. Check [documentation](https://hexo.io/docs/) for more info. If you get any problems when using Hexo, you can find the answer in [troubleshooting](https://hexo.io/docs/troubleshooting.html) or you can ask me on [GitHub](https://github.com/hexojs/hexo/issues).
## Quick Start
### Create a new post
``` bash
$ hexo new "My New Post"
```
More info: [Writing](https://hexo.io/docs/writing.html)
### Run server
``` bash
$ hexo server
```
More info: [Server](https://hexo.io/docs/server.html)
### Generate static files
``` bash
$ hexo generate
```
More info: [Generating](https://hexo.io/docs/generating.html)
### Deploy to remote sites
``` bash
$ hexo deploy
```
More info: [Deployment](https://hexo.io/docs/one-command-deployment.html)

---
title: "Manage python project configurations"
date: 2019-05-03T11:02:09+08:00
draft: false
---
While working on a Python project I kept wondering how to manage the configurations elegantly, much like the 'maven way' (put in a placeholder and replace it when packaging). Here are the points I care about:
1. separate development and production configs
2. easy to use, no need to include third party packages
3. safe, will not be committed to git repo by mistake
4. out-of-box, no need to modify the code to run on production or development environment
<!--more-->
As I have such requirements, after researching I finally choose this way to handle it.
The structure looks like:
![](https://files.d0zingcat.xyz/blog/posts/manage-python-configs/config-module.png)
*base_config.py*
```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
class Config(object):
_DEBUG = False
_PROD = False
def __getitem__(self, key):
return self.__getattribute__(key)
common_1 = '1'
common_2 = '2'
common_3 = '3'
common_4 = '4'
common_5 = '5'
```
*config_dev.py*
```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from .base_config import Config
class DevelopmentConfig(Config):
_DEBUG = True
customized_1 = '1111'
customized_2 = 'two'
```
*config_prod.py*
```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from .base_config import Config
class ProductionConfig(Config):
_PROD = True
customized_1 = 'oneoneone'
customized_2 = '2'
```
*config.py*
```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import os
from .config_dev import DevelopmentConfig
from .config_prod import ProductionConfig
mappings = {
'development': DevelopmentConfig,
'production': ProductionConfig,
'default': DevelopmentConfig
}
MY_ENV = os.environ.get('MY_ENV', 'default').lower()
config = mappings[MY_ENV]()
```
*__init__.py*
this file exists only to declare that costconfig is a package
*from other file*
```python
from costconfig.config import config as Config
```
In the files you need to use config you just need to import it.
As you can see, config.py is the key point. When the program starts, it reads the environment variable 'MY_ENV' ('default' when unset, which maps to the dev config) and looks up the corresponding config class, so simply exporting MY_ENV=production on the server makes the program pick the production config. This way the same code runs unmodified on the server and on a local development machine; only the configs change and everything works fine.
In addition, there are still three steps you have to take.
1. add the filename pattern to .gitignore, for me `*config/*_prod.*`, to ignore the production config; then we never have to worry about committing it to a git repo (e.g. GitHub) by mistake
2. export MY_ENV on the server; for me that means adding `export MY_ENV=production` at the end of ~/.bashrc
3. if you run the script from a crontab, the program will fail at first. That is because cron's environment is not your interactive bash: it does not source ~/.bashrc before executing. So declare the variable in the crontab itself: run `crontab -e` and add `MY_ENV=production` as the first line.
Reference:
[Better Cron env and shell control with the SHELL variable](https://raymii.org/s/tutorials/Better_cron_env_and_shell_control_with_the_SHELL_variale.html)
[Timesaving crontab Tips](https://krisjordan.com/blog/2013/11/04/timesaving-crontab-tips)
[Linux Environment Variables](https://codeburst.io/linux-environment-variables-53cea0245dc9)
[Scheduling Cron Jobs with Crontab](https://linuxize.com/post/scheduling-cron-jobs-with-crontab/)
[how-can-i-run-a-cron-command-with-existing-environmental-variables](https://unix.stackexchange.com/questions/27289/how-can-i-run-a-cron-command-with-existing-environmental-variables)
[where-can-i-set-environment-variables-that-crontab-will-use](https://stackoverflow.com/questions/2229825/where-can-i-set-environment-variables-that-crontab-will-use)

---
title: "Manjaro Taste"
date: 2019-10-21T13:29:25+08:00
draft: false
---
> I tried Arch Linux before, but arch's tinker-everything nature was too much for me (you configure everything yourself), so I stopped at a quick taste.
> Recently I came across this: [Manjaro Linux的两项大胆举措](https://www.debian.cn/archives/3430). Four points stood out: 1. based on Arch Linux; 2. driven by a commercial company; 3. ships FreeOffice, closed-source but with better compatibility; 4. NVIDIA drivers preinstalled, no fiddling. After a little research I found Manjaro's hardware support is also very good, and recalling how sluggish Ubuntu 19.04 had felt (boot took about two minutes even on an SSD), I decided to try this system, which was new to me.
<!--more-->Reference: [First Steps](https://manjaro.org/support/firststeps/) plus google, though honestly you can install without any tutorial; it is the usual routine. Note that if you choose custom partitioning, create at least `/efi` and `/`; my advice, though, is to pick `Erase Disk` for automatic partitioning and choose `With Hibernate` (as I understand it, this reserves the space that stores the hibernation image, which speeds hibernation up). If you want things like auto-connected Wi-Fi and a VNC server at boot, also tick `autologin` on the user setup screen: I failed to enable it afterwards despite much googling, and ended up reinstalling the system just for that checkbox. This PC sits at home for my own use, holds nothing private, and, most importantly, I want it to boot, join the network, and start zerotier unattended, so that wherever I go I can reach it over the overlay network via ssh and vnc; all of that hinges on automatic login.
> Once in the system, I configure the following:
### Initial commands
```
sudo pacman -Syu
# install yay
sudo pacman -S --noconfirm vim git wget curl file gcc
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si
# install zerotier
curl -s https://install.zerotier.com | sudo bash
yay -yS go
yay -c
yay -S ruby-irb
# install brew
sh -c "$(curl -fsSL https://raw.githubusercontent.com/Linuxbrew/install/master/install.sh)"
test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
# If `brew` is reported as command not found afterwards, close and reopen
# the terminal; it is usually a PATH problem, search accordingly.
# For me `brew install xxx` failed with "Error: cannot load such file -- irb";
# the following command fixes it
brew vendor-install ruby
# brew can also be installed this way:
#git clone https://github.com/Homebrew/brew ~/.linuxbrew/Homebrew
#mkdir ~/.linuxbrew/bin
#ln -s ../Homebrew/bin/brew ~/.linuxbrew/bin
#eval $(~/.linuxbrew/bin/brew shellenv)
#install rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
echo 'source $HOME/.cargo/env' >> ~/.profile
# install asdf
git clone https://github.com/asdf-vm/asdf.git ~/.asdf
cd ~/.asdf
git checkout "$(git describe --abbrev=0 --tags)"
echo -e '\n. $HOME/.asdf/asdf.sh' >> ~/.bashrc
echo -e '\n. $HOME/.asdf/completions/asdf.bash' >> ~/.bashrc
#enable sshd and zerotier-one
sudo systemctl start sshd
sudo systemctl enable sshd
sudo systemctl enable zerotier-one
```
One more note on the startup lines above: they were all appended to `~/.bash_profile` or `~/.profile`, never to `~/.bashrc`, yet it is `~/.bashrc` that runs whenever a new terminal opens, which makes it the most convenient place for us. On the difference between the three, see:
> Say, youd like to print some lengthy diagnostic information about your machine each time you login (load average, memory usage, current users, etc). You only want to see it on login, so you only want to place this in your .bash_profile. If you put it in your .bashrc, youd see it every time you open a new terminal window. [this](http://www.joshstaiger.org/archives/2005/07/bash_profile_vs.html)
> The .profile was the original profile configuration for the Bourne shell (a.k.a., sh). bash, being a Bourne compatible shell will read and use it. The .bash_profile on the other hand is only read by bash. It is intended for commands that are incompatible with the standard Bourne shell. [this](https://unix.stackexchange.com/questions/45684/what-is-the-difference-between-profile-and-bash-profile)
The recommended approach is to append the guard with `echo '[[ -f ~/.bashrc ]] && . ~/.bashrc' >> ~/.bash_profile`; then our own scripts and any other startup lines can all live in `~/.bashrc`, and both login and non-login shells pick them up automatically.
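The append can be rehearsed safely before touching real dotfiles; a minimal sketch, with a throwaway temp file standing in for `~/.bash_profile`:

```shell
# stand-in for ~/.bash_profile, so the demo never touches real dotfiles
profile=$(mktemp)
echo '[[ -f ~/.bashrc ]] && . ~/.bashrc' >> "$profile"
# confirm exactly one guard line landed in the file
count=$(grep -cF '.bashrc' "$profile")
echo "$count"
rm -f "$profile"
```

Once the pattern looks right, run the same `echo ... >>` against the real `~/.bash_profile`.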
### VNC
Install TigerVNC:
`sudo pacman -S tigervnc`
The first run of `vncserver` prompts for a password and asks whether to set a view-only password; configure that as you see fit.
Then edit `~/.vnc/config` to match your situation (if you enter `localhost` it only listens on 127.0.0.1 and remote connections will fail):
```
desktop=sandbox
geometry=1366x768
dpi=96
alwaysshared
```
Then edit `~/.vnc/xstartup`:
```
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startxfce4
```
Restart VNC with `vncserver`. If it reports `:1` the port is 5901, `:2` means 5902, and so on; any VNC client can then connect remotely. On macOS, [Screens](https://edovia.com/en/screens-mac/) works nicely. One small hack: reboot once after installing vncserver, otherwise `:1` may already be occupied.
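The display-to-port mapping is simply 5900 plus the display number, which a one-liner can confirm:

```shell
# VNC display :N listens on TCP port 5900 + N
display=1
port=$((5900 + display))
echo "$port"
```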
### Installing everyday software
Software I use regularly:
- docker
- neovim
- remmina (VNC viewer)
- liteide (Golang IDE)
### Speeding Docker up in mainland China
- With the Aliyun registry mirror
Apply for your personal mirror URL here: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
```
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["加速链接"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```
- With the Tencent Cloud registry mirror
Edit `/etc/default/docker` and add the line `DOCKER_OPTS="--registry-mirror=https://mirror.ccs.tencentyun.com"` (everything in that file is commented out by default), then restart Docker with `sudo systemctl restart docker` for it to take effect.
### Keep the laptop awake with the lid closed
This only applies to laptops. Mine is a Lenovo Y510P whose BIOS has been patched to remove the wireless-card whitelist, with the stock Intel card swapped for a Broadcom BCM94322; better hardware support is one of the reasons I installed Manjaro. I want the machine up for long stretches with the lid closed: screen off, but never suspended. (On AC power Manjaro does not suspend by default, though that may change; you can double-check under Settings - Power.)
Edit `/etc/systemd/logind.conf` with sudo, find `#HandleLidSwitch=suspend`, and change it to `HandleLidSwitch=ignore`; closing the lid will then no longer suspend the machine.
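The edit can also be scripted with sed; here is a sketch run against a temporary copy (point it at the real `/etc/systemd/logind.conf` with sudo once you are happy with it):

```shell
# rehearse on a temp file that mimics the relevant line of logind.conf
conf=$(mktemp)
echo '#HandleLidSwitch=suspend' > "$conf"
# drop the leading '#' and switch the action from suspend to ignore
sed -i 's/^#HandleLidSwitch=suspend/HandleLidSwitch=ignore/' "$conf"
result=$(cat "$conf")
echo "$result"
```

(`sed -i` with no suffix is the GNU form, which is what Manjaro ships.)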
For the remaining power options, check `Settings - Settings Manager - Power Manager`. On the Display tab, Blank after, Put to sleep after and Switch off after control when the screen blanks, sleeps and powers off; on the System tab, System sleep mode decides what happens after the system has been idle for a while, with separate settings for battery and AC. I simply set everything to Never, because I want the machine reachable at all times (wake-on-LAN could probably be configured instead, but I didn't bother; the extra power cost is negligible). I do recommend configuring Critical Power: at level 10%, choose Hibernate, so the machine hibernates rather than dies when the battery runs out. This is where the hibernation partition from the `With Hibernate` install option earns its keep. As an aside, Sleep keeps all state in RAM, which the power supply keeps energized.
### Silencing the PC speaker
Manjaro enables the motherboard beeper by default, so pressing Backspace with nothing to delete makes the machine beep. That sound comes not from the laptop speakers but from the buzzer on the board, and it gets annoying at night, so I wanted it off.
- Option 1
For X applications it is trivial; since you run Manjaro you are presumably mostly in a graphical session, so just run `xset b off`.
- Option 2
A permanent fix: other guides add an `rmmod` command to a startup script to unload the responsible kernel module, but the Arch wiki notes that this can cause problems, so I chose the following instead.
Create a blacklist file with `sudo vim /etc/modprobe.d/nobeep.conf`, add the line `blacklist pcspkr`, save, and reboot.
If you would rather not reboot, you can also unload the module immediately with `sudo rmmod pcspkr`.
Main references:
[How to disable beep tone in xfce when the delete button is pressed?](https://wiki.archlinux.org/index.php/PC_speaker#Disable_PC_Speaker)
[Kernel module](https://wiki.archlinux.org/index.php/Kernel_module#Using_kernel_command_line_2)
[[SOLVED] Disable PC Speaker when Backspace is pressed on Log In](https://forum.manjaro.org/t/solved-disable-pc-speaker-when-backspace-is-pressed-on-log-in/76538)
[Disable PC speaker beep (简体中文)](https://wiki.archlinux.org/index.php/Disable_PC_speaker_beep_(简体中文)#全局设置)
### Dual-booting with Windows 10
My machine has an extra SSD, so there are two disks. One holds Windows 10, but its EFI partition lived on the other disk, which previously held Ubuntu 19.04; when installing Manjaro I formatted that disk, wiping the boot partition. The machine would then boot straight into Manjaro, and even the BIOS boot menu no longer detected a Windows entry. To fix this:
1. Build a Windows USB installer with [Rufus](https://rufus.ie); the image can be downloaded here: https://share.weiyun.com/5BFi4gv
2. Boot from the USB stick (if it is a USB 3.0 stick, try a USB 2.0 port; my machine would not detect it otherwise, and I am not sure whether others hit the same problem). Look up the boot-menu hotkey for your model; on Lenovo it is F12. If the BIOS offers no quick boot menu, enter the BIOS and reorder the boot devices; **Google** the details for your machine.
3. Once the installer is up, press Shift+F10 to open the Windows command prompt and run, in order:
```
bootrec /FixMBR
bootrec /FixBoot
bootrec /rebuildbcd
```
But running these (at `bootrec /FixBoot`) gave me an "Access is denied" error; further searching produced a working sequence:
```
bootrec /FixMBR
bootsect /nt60 sys /mbr
bootrec /FixBoot
bcdboot c:\windows /s c:
bcdboot c:\windows /v
bcdedit /enum
```
The BCD store was now repaired, but we are not done yet.
4. Reboot into Windows (it is the default now; at this point the BIOS boot options also show a Windows 10 entry again), open Control Panel - Power Options - System Settings, and untick Fast Startup. See also: [How To Disable Fast Startup in Windows 10](https://help.uaudio.com/hc/en-us/articles/213195423-How-To-Disable-Fast-Startup-in-Windows-10)
5. Reboot into Windows 10 once more (I actually booted into Manjaro from the boot menu first and then rebooted into Windows 10; I am not sure this step matters, but it is what I did and it can probably be skipped).
6. Back in Windows, open an elevated command prompt and run `bcdedit /set {bootmgr} path \EFI\manjaro\grubx64.efi`. Reference: [Dual-boot Manjaro - Windows 10 - Step by Step](https://forum.manjaro.org/t/howto-dual-boot-manjaro-windows-10-step-by-step/52668)
7. Reboot and you land in Manjaro by default; reboot once more into Windows via the boot menu, reboot again, and you finally get Manjaro's GRUB menu, from which you can choose between Windows and Manjaro.
### Installing Cisco AnyConnect
The popular, highly-ranked tutorial is [[HowTo] Install the official Cisco AnyConnect VPN Client tarball using the AUR (UPDATED)](https://forum.manjaro.org/t/howto-install-the-official-cisco-anyconnect-vpn-client-tarball-using-the-aur-updated/96369), together with this [AUR](https://aur.archlinux.org/packages/cisco-anyconnect/) package. In practice it errors out, because the build needs a proprietary binary tarball that Cisco does not offer for public download. Even after I got hold of the tarball, compiling still produced a pile of errors; the curious can grab the package [here](https://b2.d0zingcat.xyz/file/21century/blog/attaches/anyconnect-linux64-4.6.02074-predeploy-k9.tar.gz) and try the tutorial themselves.
What I did instead: open the software center (Add/Remove Software) and install openconnect; then, from the Wi-Fi or wired-network icon in the corner, go to VPN Connections - Configure VPN and add an AnyConnect profile, entering the server's {ip:port} as the gateway.
### [Shell commands for toggling a terminal proxy](https://lhalcyon.com/utility-mac-ss-proxy/)
Add the following to `~/.bashrc`:
```
function setproxy() {
export {HTTP,HTTPS,FTP}_PROXY="http://127.0.0.1:7890" # the http proxy can be set this way too
export ALL_PROXY=socks5://127.0.0.1:7891
}
function unsetproxy() {
unset {HTTP,HTTPS,FTP}_PROXY
unset ALL_PROXY
}
```
Now `setproxy` and `unsetproxy` turn the terminal proxy on and off. Check the result with `curl -i https://ip.cn`, or with `curl https://www.google.com` to confirm the proxy really switched; some clients route by rule, so domestic sites still go direct even with the proxy on (or just switch the client to global mode first). Adjust the ports to your own setup: I use [Clash](https://github.com/Dreamacro/clash), so mine are 7890 (http/https) and 7891 (socks5).
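A self-contained round trip of the idea (restricted to `ALL_PROXY`, since the `{HTTP,HTTPS,FTP}_PROXY` brace expansion needs bash; the port is Clash's socks5 default from above):

```shell
# minimal versions of the two helpers, redefined here so the sketch runs on its own
setproxy() { export ALL_PROXY=socks5://127.0.0.1:7891; }
unsetproxy() { unset ALL_PROXY; }

setproxy
before=$ALL_PROXY          # proxy variable is now visible to child processes
unsetproxy
after=${ALL_PROXY:-unset}  # and gone again after unsetproxy
echo "$before $after"
```

curl and most CLI tools read `ALL_PROXY` from the environment, which is why a plain `export`/`unset` pair is enough to toggle them.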
### zerotier configuration (to be continued)

2. Initialize hexo locally: `hexo init blog`
3. Fork the theme. We want to customize it while still being able to pull upstream updates, which means keeping the repo's git directory and having a theme repository of our own; I forked [even](https://github.com/D0zingcat/hexo-theme-even).
4. Inside `blog`, clone the theme, e.g. `git clone https://github.com/D0zingcat/hexo-theme-even themes/even`. For even you also need one dependency, `npm install hexo-renderer-scss --save`, plus a fresh copy of the theme config: `cp themes/even/_config.yml.example themes/even/_config.yml`
5. Sign in to Netlify with GitHub, grant it access to the blog repo, set the build command to `hexo generate`, and point your domain's CNAME at the Netlify domain. The defaults handle the rest: any commit to the repo triggers a fresh pull and deployment.
The gotcha is that when Netlify pulls the repo, any nested repo must be added as a submodule, and a submodule does not track changes to its files by itself. Add it with `git submodule add -b master https://github.com/D0zingcat/hexo-theme-even themes/even` and commit the whole project again. To change a submodule, commit inside it first, then commit in the parent directory, where the submodule is recorded as a special reference (mode 160000). If you added the wrong submodule, see [this](https://stackoverflow.com/questions/1260748/how-do-i-remove-a-submodule). A further trap: even's `.gitignore` lists `_config.yml`, so all my customizations to the config were silently ignored, and I spent ages wondering why the submodule "would not track changes". Remove that entry and [recommit](https://blog.csdn.net/yingpaixiaochuan/article/details/53729446) the config, or Netlify never sees the correct settings. Finally, because even has a gh-pages branch, Netlify pulled that branch and errored out (root cause unknown, and not worth chasing); manually [deleting every branch except master](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository) and redeploying fixed it.

---
title: "Setting Shadowsocks Libev With Obfs"
date: 2019-02-05T20:58:41+08:00
draft: false
---
> Today is February 5, 2019, Chinese New Year's Day. Yesterday iCloud Photos would not load, and it was no fluke: the GFW apparently got an upgrade, and on my network (Jiangsu Telecom) at least, Google is completely unreachable from both desktop and phone.
So the first order of business was fixing that. For me, a circumvention setup has to cover these points:
1. Multiple users and multiple ports (friends share the server)
2. I am used to PAC mode, which toggles the proxy automatically
3. PAC mode lets me add sites at any time (a corollary of point 2; the list needs constant updating)
<!--more-->
As a stopgap I set up Cisco AnyConnect against an OpenConnect server (ocserv). Unlike the old days of installing and configuring a pile of things by hand, a ready-made [docker image](https://github.com/TommyLau/docker-ocserv) does it:
`docker run --name ocserv --privileged -p 8080:443 -p 8080:443/udp -d tommylau/ocserv`
This maps internal port 443 to 8080 on the host (no iptables configured), and sure enough both environments could get through again. But AnyConnect has the one property I dislike most: it is a global proxy. Even with CIDR-based smart routing, IPs get misclassified or go stale, and updating the routes means changing the server, which is a hassle; so it stays an emergency measure. It reminded me of the article [breakwa11](https://github.com/breakwa11) once published on fingerprinting shadowsocks traffic, which sparked enormous controversy and criticism and led to most of her repos being deleted (her early closed-source shadowsocksr-windows had drawn controversy before, but nothing on this scale), which really should not have happened. It also reminded me that a distinctive feature of that shadowsocksr-windows was obfuscation. I will skip the rest of the back-story; search for it if you are curious. [madeye](https://github.com/madeye) also posted an explanation of the situation, with the same remedy: obfuscation. So here I wanted to try obfuscating the traffic with a plugin and see whether that gets through.
*It turned out to be a false alarm: most likely some host process was squatting on the high port and causing a conflict, so the server was unreachable; changing the port solved the problem.* It is still worth pursuing, though, so I am noting it down here: a first try with v2ray-plugin obfuscating on port 80, paired with ShadowsocksX-NG, browses fine. That is still some way from my goal of TLS obfuscation with multiple accounts sharing one port, forwarded through ng, so this will do for now. Next time I tinker with https obfuscation I will write up the full process.

---
title: "Upgrade Git on Mac"
date: 2018-07-28T08:54:04+08:00
draft: false
---
Recently, I tried using GnuPG to sign my commits, but some weird phenomena appeared. To eliminate other factors affecting the problem, I tried upgrading my Git release on Mac OS X.
1. Check the git version and back up the original binary
$ git --version
git version 2.10.1 (Apple Git-78)
$ sudo mv /usr/bin/git /usr/bin/git-apple
2. Update Homebrew (assuming it is already installed)
$ brew update && brew upgrade
If you've not heard of [Homebrew](https://brew.sh/) or haven't installed it before, install brew first:
<!--more-->
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
3. Install Git with Homebrew
$ brew install git
4. (if not linked) Fix the symbolic link
$ brew link --force git
If the symbolic link is already in place, this won't cause any harm.
5. Close the Terminal, reopen it, and check the version
$ git --version
And you shall see...
git version 2.18.0
Nice! We are safe now! And next time you just need to...
$ brew update && brew upgrade
And this will automatically upgrade all the software installed by brew.
*Ref*:
[How to upgrade Git (Mac OSX)](https://medium.com/@katopz/how-to-upgrade-git-ff00ea12be18)

---
title: "Use Gpg Signing for Github"
date: 2018-07-28T20:39:40+08:00
draft: false
---
A few days ago, I noticed that when you create or delete a file on GitHub, the commit displays a "Verified" badge on the right-hand side, like the following:
![](https://blog-d0zingcat.oss-cn-hangzhou.aliyuncs.com/gpg-sign.png)
Looks really cool, doesn't it? So I tried to make this badge appear on every commit I submit (especially from a local PC/laptop, using the git client), but soon ran into a lot of trouble (when your OS is OS X). Here is an instruction for turning the feature on (which means signing commits with G(nu)PG).
<!--more-->
So, let me guide you through making this "Verified" badge come out.
- install [brew](https://brew.sh/) if not installed
`/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"`
- install gpg and pinentry
`brew update && brew install gpg && brew install pinentry-mac && echo "pinentry-program /usr/local/bin/pinentry-mac" > ~/.gnupg/gpg-agent.conf && killall gpg-agent`
- use `echo "test" | gpg --clearsign` to test whether gpg is installed correctly.
- generate a gpg key and import it into GitHub (just use the new key for signing, ignoring any existing gpg key)
`gpg --default-new-key-algo rsa4096 --gen-key`
> When asked to enter your email address, ensure that you enter the verified email address for your GitHub account.
Now, the terminal displays something like the following:
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFtUemQBEADE2tGSTjXyd6zqx6WIRlB+EosPBV5WPgLUCSFiHgPLpjY59Kek
0WwdBX+ippgAfFNfdiYWJLs0oddd8zV/aU70ZMSUErCS9oyWem/HEpLO383
CcSrvN2YMWuAviBcsrbbUY6uWBRbeHqyNWir19nFOBURXEk9JomHefI3JYcxv6xA
wZPz2mEEn+HfneoqAb9oFq5IKqMkteFQ5uBTgUfG5V7QXjxr2qbjDg8i3wARAQAB
tCBMZWUgVGFuZyA8ZDB6aW5nY2F0QG91dGxvb2suY29tPokCTgQTAQgAOBYhBBHd
N/0HszSZ2DAZ4VoovvfEXbLFBQJbVHpkAhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4B
AheAAAoJEFoovvfEXbLFsssT2oEi5RgpO8hwCkRSQTVSCIgUSH
TBkEh+Sewy6/DgALbv+KHkox/oG44BAnI4/9jxK/p2HUNE1rP2VnWw50kZyOcGcA
d1nt63jLqxyURq+7h6MHrw0D81L3U72/4KHnK5JcBSpOYrDOpkb+LQVqD1hYKpoD
oMvd57qZECuiXHKLe82YlL8FaWhILoaG6jjEn90w8n4VZmvOyI39FaBgALTL4nH0
DwO8uQINBFtUemQBEAC1RGuRYJiwTv//9wThSiMKRO0xZUUWI/kQaDqYSExnaRSV
x51bp7fD5EEJCE8w8o6oLQhvPrpRPsssurUSeAOwfOHMRhUaU1XyR4O2OW
yu3pIsW1/2la18XMbBIKo3Z4wLFL+XWznB3wHsYtHcJQFqUVetF85DB7ILAvsQAR
AQABiQI2BBgBCAAgFiEEEd03/QezNJnYMBnhWii+98RdssUFAltUemQCGwwACgkQ
Wii+98RdssVugg/+KFtx74+ip/IPV/bvssssL7JfFJO0QzOUc=
=BGUu
-----END PGP PUBLIC KEY BLOCK-----
```
Copy your GPG key, beginning with `-----BEGIN PGP PUBLIC KEY BLOCK-----` and ending with `-----END PGP PUBLIC KEY BLOCK-----`, and paste it into GitHub - Settings - SSH and GPG keys - New GPG key.
- set up the local git client
gpg --list-secret-keys --keyid-format SHORT | grep ^sec
When the command is entered, the terminal displays something like the following:
sec rsa4096/3AA5C343 2018-07-22 [SC]
Note that the keyid format should be SHORT instead of LONG; this step differs from GitHub's official instructions. Copy the '3AA5C343' part, as we will use this key:
`git config --global user.signingkey 3AA5C343 && git config --global gpg.program $(which gpg)`
- test whether git knows how to sign your commit
`mkdir test && cd test && git init && touch a && git add . && git commit -S -m "test" && cd .. && rm -rf test`
If no error shows, you are safe, and from now on the `-S` argument tells git to sign with GPG. In addition, to sign all commits by default in any local repository on your computer, run `git config --global commit.gpgsign true`.
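To convince yourself where the setting lands, you can first try it per-repository in a throwaway repo (no `--global`, so nothing persists after cleanup):

```shell
# scratch repo just for inspecting the config key
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config commit.gpgsign true
# read the value back the way git itself would
val=$(git -C "$repo" config --get commit.gpgsign)
echo "$val"
rm -rf "$repo"
```

The `--global` form writes the same key to `~/.gitconfig` instead, where it applies to every repository.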
*Ref:*
- [Signing commits with GPG](https://help.github.com/articles/signing-commits-with-gpg/)
- [gpg failed to sign the data fatal: failed to write commit object [Git 2.10.0]](https://stackoverflow.com/questions/39494631/gpg-failed-to-sign-the-data-fatal-failed-to-write-commit-object-git-2-10-0/39626266)
- [GPG and git on macOS](https://gist.github.com/danieleggert/b029d44d4a54b328c0bac65d46ba4c65)
- [Generating a new GPG key](https://help.github.com/articles/generating-a-new-gpg-key/)
- [Telling Git about your GPG key](https://help.github.com/articles/telling-git-about-your-gpg-key/)
- [The git error: “gpg failed to sign the data”](https://ducfilan.wordpress.com/2017/03/10/the-git-error-gpg-failed-to-sign-the-data/comment-page-1/)
- [Signing Commits in Git](https://nathanielhoag.com/blog/2016/09/05/signing-commits-in-git/)

source/about.md
> Do ordinary things; be an ordinary person.
>
> Neither fret over poverty and obscurity, nor hanker after wealth and rank.
>
> In me the tiger sniffs the rose.
>
> Face the sunshine; be an open and confident person.
Hi 👋, my name is d0zingcat. I currently work at [Lilith Games](https://www.lilithgames.com/cn/?nlr=1) as a brick-laying code monkey; friends are welcome to throw résumés at me. I program in Python, Golang and Erlang, am hopelessly infatuated with Elixir at the moment, and also plan to pick up Rust, Lua and JS. Maybe one day in the future I will try to update this "about me" page.

source/links.md
> Here are some blog recommendations for you.
[Droomo](https://droomo.top)
[Windranger](http://windranger.wang)
[Xingo](https://blog.xingoxu.com)

source/resources.md
Recently, I've been...
### Reading
| Name | Progress |
| --- | --- |
| [Python教程][1]| 70% |
| [The Go Programming Language][2] | 55% |
| [Erlang程序设计第2版][3]| 50% |
| [Erlang趣学指南][4]| 30%|
| [月亮和六便士](https://zh.wikipedia.org/wiki/%E6%9C%88%E4%BA%AE%E5%92%8C%E5%85%AD%E4%BE%BF%E5%A3%AB)|&#10003; |
| [解忧杂货店](https://book.douban.com/subject/25862578/)| &#10003; |
| [三體](https://zh.wikipedia.org/wiki/%E4%B8%89%E4%BD%93_(%E5%B0%8F%E8%AF%B4))| &#10003;|
| [一个叫欧维的男人决定去死](https://book.douban.com/subject/26672693/)| &#10003; |
| [皮囊](https://book.douban.com/subject/26278687/)| &#10003; |
| [孩子们的诗](https://book.douban.com/subject/27133274/)| &#10003; |
| [無聲告白](https://zh.wikipedia.org/wiki/%E6%97%A0%E5%A3%B0%E5%91%8A%E7%99%BD)| &#10003; |
| [小王子](https://zh.wikipedia.org/wiki/%E5%B0%8F%E7%8E%8B%E5%AD%90)| &#10003; |
| [人生海海](https://book.douban.com/subject/30475767/)| &#10003; |
| [活着][13]| &#10003; |
| [许三观卖血记][14]| &#10003; |
| [兄弟](https://book.douban.com/subject/4882133/)| &#10003; |
| [飄][5] | &#10003; |
| [駱駝祥子][6] | &#10003; |
| [大亨小傳][7] | &#10003; |
| [Nineteen Eighty-Four][8] | &#10003; |
| [穆斯林的葬禮][9] | &#10003; |
| [Black Beauty][10] | &#10003; |
| [Twenty Thousand Leagues Under the Sea][11] | &#10003; |
| [King Arthur][12] | &#10003; |
| [嫌疑人X的献身][28]| &#10003; |
| [Vater und Sohn][32]| &#10003; |
| [格林童话](https://book.douban.com/subject/1880823/)| &#10003; |
| [给孩子读诗](https://book.douban.com/subject/26682576/)| &#10003; |
| [谁动了我的奶酪?](https://book.douban.com/subject/2225735/)| &#10003; |
| [习惯的力量](https://book.douban.com/subject/20507212/)| &#10003; |
| [时间的形状](https://book.douban.com/subject/26992254/)| &#10003; |
| [你一定要努力,但千万别着急](https://book.douban.com/subject/26786048/)| &#10003; |
| [拆掉思维里的墙](https://book.douban.com/subject/6789999/)| &#10003; |
| [鱼羊野史•第4卷](https://book.douban.com/subject/26592869/)| &#10003; |
| [雪人](https://book.douban.com/subject/26729776/)| &#10003; |
### Watching
| Name | Progress |
| --- | --- |
| [罗小黑战记](https://baike.baidu.com/item/罗小黑战记/22902442) | |
| [哪吒之魔童降世](https://movie.douban.com/subject/26794435//) | |
| [中国机长](https://movie.douban.com/subject/30295905/) | |
| [The lion king](https://www.rottentomatoes.com/m/the_lion_king_2019) | &#10003; |
| [高级语言程序设计PythonCAP][15] | 50% |
| [TED Talk of the Week: Nature. Beauty. Gratitude][16] | &#10003; |
| [Schindler's List][17]| &#10003; |
| [V for Vendetta][18] | &#10003; |
| [The Curious Case of Benjamin Button][19] | &#10003; |
| [Se7en][20] | &#10003; |
| [Gone Girl][21] | &#10003; |
| [The Princess Diaries][22] | &#10003; |
| [Fight Club][23]| &#10003; |
| [How to Train Your Dragon: The Hidden World][24] | &#10003; |
| [AVENGERS: ENDGAME][25] | &#10003; |
| [Game of Thrones][26] | &#10003; |
| [Andhadhun 2018][27] | &#10003; |
| [The Battle of Chernobyl][29] | &#10003; |
| [Silicon Valley][30] | &#10003; |
| [Chernobyl][31] | &#10003; |
### Playing
| Name | Progress |
| --- | --- |
| [Dota Underlords](https://store.steampowered.com/app/1046930/Dota_Underlords/) | Pending |
| [Stardew Valley][33] | Pending |
| [AFK Arena][34] | / |
[1]: https://www.liaoxuefeng.com/wiki/1016959663602400
[2]: http://www.gopl.io/
[3]: http://www.ituring.com.cn/book/1264
[4]: https://www.epubit.com/book/detail/27325;jsessionid=C1041437DF3628719D8D3CB8B0EC19F3
[5]: https://zh.wikipedia.org/zh/%E9%A3%84
[6]: https://zh.wikipedia.org/wiki/%E9%AA%86%E9%A9%BC%E7%A5%A5%E5%AD%90
[7]: https://zh.wikipedia.org/wiki/%E4%BA%86%E4%B8%8D%E8%B5%B7%E7%9A%84%E7%9B%96%E8%8C%A8%E6%AF%94
[8]: https://en.wikipedia.org/wiki/Nineteen_Eighty-Four
[9]: https://zh.wikipedia.org/wiki/%E7%A9%86%E6%96%AF%E6%9E%97%E7%9A%84%E8%91%AC%E7%A4%BC
[10]: https://en.wikipedia.org/wiki/Black_Beauty
[11]: https://en.wikipedia.org/wiki/Twenty_Thousand_Leagues_Under_the_Sea
[12]: https://en.wikipedia.org/wiki/King_Arthur
[13]: https://book.douban.com/subject/1082154/
[14]: https://book.douban.com/subject/1029791/
[15]: https://www.icourse163.org/course/HIT-1001616002
[16]: https://www.goodnet.org/articles/ted-talk-week-nature-beauty-gratitude
[17]: https://www.imdb.com/title/tt0108052/
[18]: https://www.imdb.com/title/tt0434409/
[19]: https://www.imdb.com/title/tt0421715/
[20]: https://www.imdb.com/title/tt0114369/?ref_=nm_knf_t1
[21]: https://www.imdb.com/title/tt2267998/
[22]: https://www.imdb.com/title/tt0247638/
[23]: https://www.imdb.com/title/tt0137523/
[24]: https://www.imdb.com/title/tt2386490/?ref_=nv_sr_1?ref_=nv_sr_1
[25]: https://www.rottentomatoes.com/m/avengers_endgame
[26]: https://en.wikipedia.org/wiki/Game_of_Thrones
[27]: https://www.imdb.com/title/tt8108198/
[28]: https://book.douban.com/subject/3211779/
[29]: https://www.imdb.com/title/tt1832484/
[30]: https://www.imdb.com/title/tt2575988/?ref_=nv_sr_1?ref_=nv_sr_1
[31]: https://www.imdb.com/title/tt7366338/
[32]: https://de.wikipedia.org/wiki/Vater_und_Sohn
[33]: https://www.stardewvalley.net/
[34]: https://www.afkarena.com/

source/tools.md
> Below are some powerful tools, sites, and other things I have used and consider worth sharing.
## Life
### Memberships & subscriptions
- [逮虾户](https://daixiahu.co/#/buses): a nonprofit subscription-sharing platform
## Systems
### Windows
- [Rufus](https://rufus.ie): bootable USB image writer
- [Nox Player](https://www.bignox.com): Android emulator with macro support
### macOS/Linux
- [ASDF](https://asdf-vm.com/#/core-manage-asdf-vm): version manager for programming-language runtimes
- [Bear](https://bear.app): cross-platform personal notes and knowledge management with Markdown support
- [Agenda](https://apps.apple.com/us/app/agenda/id1287445660?mt=12): date-focused note-taking (time-management) app
- [Eudic Lite](https://apps.apple.com/us/app/eudic-%E6%AC%A7%E8%B7%AF%E8%AF%8D%E5%85%B8/id434350458?mt=12): cross-platform dictionary that supports exporting word lists
- [autojump](https://github.com/wting/autojump): a command that jumps straight to frequently used directories
### iOS
- [Infuse](https://firecore.com/infuse): video player