3. Column Family Database
주종면: an expert who switched from Oracle to MongoDB
Murmur3Partitioner calculator
http://www.geroba.com/cassandra/cassandra-token-calculator/
Example: 6 nodes
Node 0: -9223372036854775808
Node 1: -6148914691236517206
Node 2: -3074457345618258604
Node 3: -2
Node 4: 3074457345618258600
Node 5: 6148914691236517202
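The calculator just spaces the initial tokens evenly across the Murmur3 range [-2^63, 2^63); the same numbers fall out of a few lines of Python:

# initial token of node i out of N, evenly spaced over the Murmur3 range
num_nodes = 6
for i in range(num_nodes):
    print(-2**63 + i * 2**64 // num_nodes)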
In the original column-family model there are two sort orders:
① rows are sorted by row key (↓), and
② within each row, columns are sorted by column name (→).

          c1   c2  ...
rowkey1   v1   ...
rowkey2
rowkey3

Cassandra, however, cannot keep rows sorted by row key (the partitioner hashes it),
so it sorts by column instead.
Cassandra hands-on
./start-all.sh
./bin/nodetool -h s1 status
Shells
1. Cassandra-CLI shell : deprecated, dropped in 3.x
2. CQL Shell
[CLI]
./bin/cassandra-cli
[CQL]
[cas@s2 cassandra]$ ./bin/cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.13 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> use test1;
cqlsh:test1> drop keyspace testdb;
cqlsh:test1> CREATE KEYSPACE testdb with REPLICATION={'class':'SimpleStrategy', 'replication_factor': 3};
cqlsh:test1> use testdb;
cqlsh:testdb>
cqlsh:testdb>
cqlsh:testdb> CREATE TABLE users (
... userid varchar,
... username varchar,
... password varchar,
... email varchar,
... PRIMARY KEY (userid)
... );
cqlsh:testdb> insert into users (userid, username, password, email)
... values('gdhong', '홍길동', '1111', 'gdhong@test.com');
cqlsh:testdb> select * from users;
userid | email | password | username
--------+-----------------+----------+----------
gdhong | gdhong@test.com | 1111 | 홍길동
(1 rows)
cqlsh:testdb>
cqlsh:testdb> CREATE TABLE employees1 (
... empid int, deptid int, empname text,
... PRIMARY KEY (deptid, empid)
... );
cqlsh:testdb> CREATE TABLE employees2 (
... empid int, deptid int, empname text,
... PRIMARY KEY (empid, deptid)
... );
cqlsh:testdb>
cqlsh:testdb>
cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10001, 1, '홍길동');
cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10002, 1, '박문수');
cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10003, 2, '이몽룡');
cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10004, 1, '변학도');
cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10005, 3, '성춘향');
cqlsh:testdb> INSERT INTO employees1 (empid, deptid, empname) VALUES (10006, 3, '갑돌이');
cqlsh:testdb>
cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10001, 1, '홍길동');
cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10002, 1, '박문수');
cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10003, 2, '이몽룡');
cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10004, 1, '변학도');
cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10005, 3, '성춘향');
cqlsh:testdb> INSERT INTO employees2 (empid, deptid, empname) VALUES (10006, 3, '갑돌이');
cqlsh:testdb>
cqlsh:testdb>
cqlsh:testdb>
cqlsh:testdb> select * from employees1;
deptid | empid | empname
--------+-------+---------
1 | 10001 | 홍길동
1 | 10002 | 박문수
1 | 10004 | 변학도
2 | 10003 | 이몽룡
3 | 10005 | 성춘향
3 | 10006 | 갑돌이
(6 rows)
cqlsh:testdb> select * from employees2;
empid | deptid | empname
-------+--------+---------
10001 | 1 | 홍길동
10002 | 1 | 박문수
10003 | 2 | 이몽룡
10006 | 3 | 갑돌이
10004 | 1 | 변학도
10005 | 3 | 성춘향
(6 rows)
cqlsh:testdb>
============================================================
[cas@s1 cassandra]$ ./bin/cassandra-cli
Connected to: "Test Cluster" on 127.0.0.1/9160
Welcome to Cassandra CLI version 2.0.13
The CLI is deprecated and will be removed in Cassandra 3.0. Consider migrating to cqlsh.
CQL is fully backwards compatible with Thrift data; see http://www.datastax.com/dev/blog/thrift-to-cql3
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown] use testdb;
Authenticated to keyspace: testdb
[default@testdb] list employees1;
Using default limit of 100
Using default cell limit of 100
-------------------
RowKey: 1
=> (name=10001:, value=, timestamp=1485157888772000)
=> (name=10001:empname, value=ed998deab8b8eb8f99, timestamp=1485157888772000)
=> (name=10002:, value=, timestamp=1485157888782000)
=> (name=10002:empname, value=ebb095ebacb8ec8898, timestamp=1485157888782000)
=> (name=10004:, value=, timestamp=1485157888812000)
=> (name=10004:empname, value=ebb380ed9599eb8f84, timestamp=1485157888812000)
-------------------
RowKey: 2
=> (name=10003:, value=, timestamp=1485157888794000)
=> (name=10003:empname, value=ec9db4ebaabdeba3a1, timestamp=1485157888794000)
-------------------
RowKey: 3
=> (name=10005:, value=, timestamp=1485157888820000)
=> (name=10005:empname, value=ec84b1ecb698ed96a5, timestamp=1485157888820000)
=> (name=10006:, value=, timestamp=1485157888825000)
=> (name=10006:empname, value=eab091eb8f8cec9db4, timestamp=1485157888825000)
3 Rows Returned.
Elapsed time: 86 msec(s).
In employees1, the RowKey is deptid and the columns are sorted by empid,
which is why the earlier SELECT came back ordered by (deptid, empid).
In employees2 the RowKey is empid, so rows come back in the partitioner's
hash order instead.
-------------------
cqlsh:testdb> CREATE TABLE employees3 (
... empid int, deptid int, empname text,
... email text, tel text,
... PRIMARY KEY ((deptid, empid), empname)
... );
cqlsh:testdb> INSERT INTO employees3 (empid, deptid, empname, email, tel) VALUES (10001, 1, '홍길동', 'gdhong@opensg.net', '010-222-3333');
cqlsh:testdb> INSERT INTO employees3 (empid, deptid, empname, email, tel) VALUES (10002, 1, '박문수', 'mspark@opensg.net','010-777-7778');
cqlsh:testdb>
[default@testdb] list employees3;
Using default limit of 100
Using default cell limit of 100
-------------------
RowKey: 1:10002
=> (name=박문수:, value=, timestamp=1485159139959000)
=> (name=박문수:email, value=6d737061726b406f70656e73672e6e6574, timestamp=1485159139959000)
=> (name=박문수:tel, value=3031302d3737372d37373738, timestamp=1485159139959000)
-------------------
RowKey: 1:10001
=> (name=홍길동:, value=, timestamp=1485159139953000)
=> (name=홍길동:email, value=6764686f6e67406f70656e73672e6e6574, timestamp=1485159139953000)
=> (name=홍길동:tel, value=3031302d3232322d33333333, timestamp=1485159139953000)
2 Rows Returned.
Elapsed time: 77 msec(s).
In PRIMARY KEY ((deptid, empid), empname), deptid and empid together form the
composite partition key (hence RowKey values like 1:10001), and empname is the
clustering column.
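With a composite partition key, a read must supply every partition-key column; a sketch against the rows inserted above:

cqlsh:testdb> SELECT * FROM employees3 WHERE deptid = 1 AND empid = 10001;
-- deptid alone would be rejected: it is only part of the partition key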
SET
cqlsh:testdb> ALTER TABLE users ADD phones set<text>;
cqlsh:testdb> UPDATE users SET phones = phones + { '010-1212-3232' } WHERE userid='gdhong';
cqlsh:testdb> UPDATE users SET phones = phones + { '02-3429-5211' } WHERE userid='gdhong';
Request did not complete within rpc_timeout.
cqlsh:testdb> UPDATE users SET phones = phones + { '02-3429-5211' } WHERE userid='gdhong';
cqlsh:testdb>
cqlsh:testdb>
cqlsh:testdb> SELECT userid, phones FROM users;
userid | phones
--------+-----------------------------------
gdhong | {'010-1212-3232', '02-3429-5211'}
(1 rows)
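For reference, elements come out of a set with the same UPDATE syntax, using - instead of + (standard CQL):

cqlsh:testdb> UPDATE users SET phones = phones - { '02-3429-5211' } WHERE userid='gdhong';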
LIST
cqlsh:testdb>
cqlsh:testdb>
cqlsh:testdb> ALTER TABLE users ADD visit_places list<text>;
cqlsh:testdb> UPDATE users SET visit_places = ['스타벅스', '내사무실']
... WHERE userid='gdhong';
cqlsh:testdb> SELECT userid, visit_places FROM users;
userid | visit_places
--------+--------------------------
gdhong | ['스타벅스', '내사무실']
(1 rows)
cqlsh:testdb> UPDATE users SET visit_places = visit_places + ['잠실야구장']
... WHERE userid='gdhong';
cqlsh:testdb> UPDATE users SET visit_places = ['인사동 골목'] + visit_places
... WHERE userid='gdhong';
cqlsh:testdb> SELECT userid, visit_places FROM users;
userid | visit_places
--------+-------------------------------------------------------
gdhong | ['인사동 골목', '스타벅스', '내사무실', '잠실야구장']
(1 rows)
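List elements can also be removed by index (standard CQL; index 0 is the '인사동 골목' entry prepended above):

cqlsh:testdb> DELETE visit_places[0] FROM users WHERE userid='gdhong';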
MAP
cqlsh:testdb> drop table users;
cqlsh:testdb> CREATE TABLE users (
... userid varchar,
... username varchar,
... password varchar,
... email varchar,
... PRIMARY KEY (userid)
... );
cqlsh:testdb>
cqlsh:testdb> ALTER TABLE users ADD visit_places map<timestamp, text>;
cqlsh:testdb> INSERT INTO users (userid, username, password, email) VALUES ('gdhong', '홍길동', '1234', 'gdhong@opensg.net');
cqlsh:testdb> UPDATE users SET visit_places={ '2013-08-31 12:12:46':'스타벅스' } WHERE userid='gdhong';
cqlsh:testdb> UPDATE users SET visit_places['2013-09-02 14:15:29'] = '야구장' WHERE userid='gdhong';
cqlsh:testdb> SELECT userid, visit_places FROM users;
userid | visit_places
--------+--------------------------------------------------------------------------------
gdhong | {'2013-08-31 12:12:46+0900': '스타벅스', '2013-09-02 14:15:29+0900': '야구장'}
(1 rows)
---
[default@testdb] list users;
Using default limit of 100
Using default cell limit of 100
-------------------
RowKey: gdhong
=> (name=, value=, timestamp=1485160082176000)
=> (name=email, value=6764686f6e67406f70656e73672e6e6574, timestamp=1485160082176000)
=> (name=password, value=31323334, timestamp=1485160082176000)
=> (name=username, value=ed998deab8b8eb8f99, timestamp=1485160082176000)
index (1) must be less than size (1)
(The error above is the deprecated CLI failing to decode a CQL3 collection column — here the visit_places map; use cqlsh or DevCenter to inspect such tables.)
[default@testdb]
GUI : DataStax DevCenter
C:\Users\student\Downloads\NOSQL데이터모델링\설치프로그램\클라이언트\cassandra
DevCenter-1.3.1-win-x86_64.zip
HBASE
Startup order: Hadoop > ZooKeeper > HBase
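A hypothetical sketch of what a start-hb.sh like the one run below chains together, in that dependency order (paths assume the symlinks shown later in this note):

#!/bin/bash
# hypothetical start-hb.sh: start in dependency order
~/hadoop/bin/start-all.sh                    # Hadoop (HDFS + MapReduce)
~/zookeeper/bin/zkServer.sh start            # ZooKeeper on s1
ssh s2 '~/zookeeper/bin/zkServer.sh start'   # ZooKeeper on s2
~/hbase/bin/start-hbase.sh                   # HBase master + regionservers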
[hadoop@s1 ~]$ ./start-hb.sh
namenode running as process 8679. Stop it first.
s2: datanode running as process 7597. Stop it first.
s4: ssh: connect to host s4 port 22: No route to host
s3: ssh: connect to host s3 port 22: No route to host
s2: secondarynamenode running as process 7704. Stop it first.
jobtracker running as process 8860. Stop it first.
s2: tasktracker running as process 7794. Stop it first.
s4: ssh: connect to host s4 port 22: No route to host
s3: ssh: connect to host s3 port 22: No route to host
JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
ssh: connect to host s3 port 22: No route to host
ssh: connect to host s4 port 22: No route to host
starting master, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-master-s1.test.com.out
s2: starting regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-s2.test.com.out
s3: ssh: connect to host s3 port 22: No route to host
s4: ssh: connect to host s4 port 22: No route to host
[hadoop@s1 ~]$
[hadoop@s1 ~]$
[hadoop@s1 ~]$ jps
8860 JobTracker
9380 Jps
8679 NameNode
9287 HMaster
9175 QuorumPeerMain
[hadoop@s1 ~]$
[hadoop@s1 ~]$
[hadoop@s1 ~]$ ssh s2
[hadoop@s2 ~]$ jps
8056 HRegionServer
7997 QuorumPeerMain
8145 Jps
- Partial scans are possible
: e.g. scan all rows whose rowkey starts with 'abc'
(the equivalent of LIKE 'abc%')
: Cassandra cannot do this
Filtering is documented here:
https://www.cloudera.com/documentation/enterprise/5-5-x/topics/admin_hbase_filtering.html
Query conditions must always go through the rowkey, composite or not,
so rowkeys have to be designed carefully — see the scan sketch after this example.
create 'orders', 'client', 'product'
put 'orders', 'joe_2013-01-13', 'client:name', 'Joe'
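The partial scan described above, sketched in the HBase shell against the orders table just created (PrefixFilter and the STARTROW/STOPROW options are standard shell syntax; output omitted):

# all rows whose rowkey starts with 'joe_'
scan 'orders', {FILTER => "PrefixFilter('joe_')"}
# equivalent range form: STARTROW inclusive, STOPROW exclusive ('~' as a crude ASCII upper bound)
scan 'orders', {STARTROW => 'joe_', STOPROW => 'joe_~'}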
GUI : h-rider
Supports only older HBase versions;
works with 0.94.
https://phoenix.apache.org/
http://apache.tt.co.kr/phoenix/
To use secondary indexes, the following must be added so that the WAL (journal log) can be edited:
vi conf/hbase-site.xml
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
scp conf/hbase-site.xml s2:~/hbase/conf
scp conf/hbase-site.xml s3:~/hbase/conf
scp conf/hbase-site.xml s4:~/hbase/conf
[hadoop@s1 다운로드]$ tar -xvf phoenix-3.3.1-bin.tar.gz
phoenix-3.3.1-bin/
phoenix-3.3.1-bin/hadoop1/
phoenix-3.3.1-bin/hadoop1/phoenix-core-3.3.1-tests-hadoop1.jar
phoenix-3.3.1-bin/hadoop1/phoenix-flume-3.3.1-tests-hadoop1.jar
phoenix-3.3.1-bin/hadoop1/bin/
phoenix-3.3.1-bin/hadoop1/bin/log4j.properties
phoenix-3.3.1-bin/hadoop1/bin/performance.py
phoenix-3.3.1-bin/hadoop1/bin/psql.py
phoenix-3.3.1-bin/hadoop1/bin/phoenix_sandbox.py
phoenix-3.3.1-bin/hadoop1/bin/sqlline.py
phoenix-3.3.1-bin/hadoop1/bin/end2endTest.py
phoenix-3.3.1-bin/hadoop1/bin/readme.txt
phoenix-3.3.1-bin/hadoop1/bin/sandbox-log4j.properties
phoenix-3.3.1-bin/hadoop1/bin/hbase-site.xml
phoenix-3.3.1-bin/hadoop1/bin/phoenix_utils.py
phoenix-3.3.1-bin/hadoop1/phoenix-pig-3.3.1-tests-hadoop1.jar
phoenix-3.3.1-bin/hadoop1/phoenix-3.3.1-client-hadoop1.jar
phoenix-3.3.1-bin/hadoop1/phoenix-flume-3.3.1-hadoop1.jar
phoenix-3.3.1-bin/hadoop1/phoenix-pig-3.3.1-hadoop1.jar
phoenix-3.3.1-bin/CHANGES
phoenix-3.3.1-bin/common/
phoenix-3.3.1-bin/common/phoenix-3.3.1-client-minimal.jar
phoenix-3.3.1-bin/common/phoenix-core-3.3.1.jar
phoenix-3.3.1-bin/common/phoenix-3.3.1-client-without-hbase.jar
phoenix-3.3.1-bin/hadoop2/
phoenix-3.3.1-bin/hadoop2/phoenix-pig-3.3.1-hadoop2.jar
phoenix-3.3.1-bin/hadoop2/phoenix-pig-3.3.1-tests-hadoop2.jar
phoenix-3.3.1-bin/hadoop2/bin/
phoenix-3.3.1-bin/hadoop2/bin/log4j.properties
phoenix-3.3.1-bin/hadoop2/bin/performance.py
phoenix-3.3.1-bin/hadoop2/bin/psql.py
phoenix-3.3.1-bin/hadoop2/bin/phoenix_sandbox.py
phoenix-3.3.1-bin/hadoop2/bin/sqlline.py
phoenix-3.3.1-bin/hadoop2/bin/end2endTest.py
phoenix-3.3.1-bin/hadoop2/bin/readme.txt
phoenix-3.3.1-bin/hadoop2/bin/sandbox-log4j.properties
phoenix-3.3.1-bin/hadoop2/bin/hbase-site.xml
phoenix-3.3.1-bin/hadoop2/bin/phoenix_utils.py
phoenix-3.3.1-bin/hadoop2/phoenix-core-3.3.1-tests-hadoop2.jar
phoenix-3.3.1-bin/hadoop2/phoenix-flume-3.3.1-hadoop2.jar
phoenix-3.3.1-bin/hadoop2/phoenix-flume-3.3.1-tests-hadoop2.jar
phoenix-3.3.1-bin/hadoop2/phoenix-3.3.1-client-hadoop2.jar
phoenix-3.3.1-bin/README
phoenix-3.3.1-bin/LICENSE
phoenix-3.3.1-bin/NOTICE
phoenix-3.3.1-bin/examples/
phoenix-3.3.1-bin/examples/pig/
phoenix-3.3.1-bin/examples/pig/test.pig
phoenix-3.3.1-bin/examples/pig/testdata
phoenix-3.3.1-bin/examples/WEB_STAT.csv
phoenix-3.3.1-bin/examples/STOCK_SYMBOL.sql
phoenix-3.3.1-bin/examples/WEB_STAT_QUERIES.sql
phoenix-3.3.1-bin/examples/STOCK_SYMBOL.csv
phoenix-3.3.1-bin/examples/WEB_STAT.sql
[hadoop@s1 다운로드]$ ll
합계 168236
-rw-rw-r--. 1 hadoop hadoop 1576 2017-01-23 17:53 aa
-rw-rw-r--. 1 hadoop hadoop 38096663 2015-04-19 20:30 hadoop-1.2.1-bin.tar.gz
-rw-rw-r--. 1 hadoop hadoop 59364077 2015-04-19 20:30 hbase-0.94.27.tar.gz
drwxr-xr-x. 6 hadoop hadoop 4096 2015-04-04 06:59 phoenix-3.3.1-bin
-rw-rw-r--. 1 hadoop hadoop 57087019 2015-04-30 23:22 phoenix-3.3.1-bin.tar.gz
drwxrwxr-x. 2 hadoop hadoop 4096 2017-01-24 10:07 temp
-rw-rw-r--. 1 hadoop hadoop 17699306 2015-04-19 20:34 zookeeper-3.4.6.tar.gz
[hadoop@s1 다운로드]$ mv phoenix-3.3.1-bin ~/phoenix
[hadoop@s1 다운로드]$ cd
[hadoop@s1 ~]$ ll
합계 88
lrwxrwxrwx. 1 hadoop hadoop 13 2015-04-19 20:35 hadoop -> hadoop-1.2.1/
drwxr-xr-x. 16 hadoop hadoop 4096 2015-04-19 21:19 hadoop-1.2.1
lrwxrwxrwx. 1 hadoop hadoop 13 2015-04-19 21:06 hbase -> hbase-0.94.27
drwxr-xr-x. 11 hadoop hadoop 4096 2015-04-19 21:28 hbase-0.94.27
drwxr-xr-x. 6 hadoop hadoop 4096 2015-04-04 06:59 phoenix
-rwxr--r--. 1 hadoop hadoop 238 2015-04-19 22:30 start-hb.sh
-rwxr--r--. 1 hadoop hadoop 231 2015-04-19 22:29 stop-hb.sh
lrwxrwxrwx. 1 hadoop hadoop 16 2015-04-19 21:10 zookeeper -> zookeeper-3.4.6/
drwxr-xr-x. 12 hadoop hadoop 4096 2015-04-19 21:18 zookeeper-3.4.6
-rw-rw-r--. 1 hadoop hadoop 30623 2017-01-24 09:44 zookeeper.out
drwxr-xr-x. 2 hadoop hadoop 4096 2015-03-10 20:54 공개
drwxr-xr-x. 3 hadoop hadoop 4096 2017-01-24 10:25 다운로드
drwxr-xr-x. 2 hadoop hadoop 4096 2015-03-10 20:54 문서
drwxr-xr-x. 2 hadoop hadoop 4096 2015-03-10 20:55 바탕화면
drwxr-xr-x. 2 hadoop hadoop 4096 2015-03-10 20:54 비디오
drwxr-xr-x. 2 hadoop hadoop 4096 2015-03-10 20:54 사진
drwxr-xr-x. 2 hadoop hadoop 4096 2015-03-10 20:54 음악
drwxr-xr-x. 2 hadoop hadoop 4096 2015-03-10 20:54 템플릿
[hadoop@s1 ~]$ cd phoenix/
[hadoop@s1 phoenix]$ ll
합계 76
-rw-r--r--. 1 hadoop hadoop 35004 2015-04-04 06:58 CHANGES
-rw-r--r--. 1 hadoop hadoop 12316 2015-04-04 06:58 LICENSE
-rw-r--r--. 1 hadoop hadoop 2161 2015-04-04 06:58 NOTICE
-rw-r--r--. 1 hadoop hadoop 794 2015-04-04 06:58 README
drwxr-xr-x. 2 hadoop hadoop 4096 2015-04-04 06:58 common
drwxr-xr-x. 3 hadoop hadoop 4096 2015-04-04 06:58 examples
drwxr-xr-x. 3 hadoop hadoop 4096 2015-04-04 06:58 hadoop1
drwxr-xr-x. 3 hadoop hadoop 4096 2015-04-04 06:59 hadoop2
[hadoop@s1 phoenix]$ cd hadoop1
[hadoop@s1 hadoop1]$ cd ..
[hadoop@s1 phoenix]$ ll
합계 76
-rw-r--r--. 1 hadoop hadoop 35004 2015-04-04 06:58 CHANGES
-rw-r--r--. 1 hadoop hadoop 12316 2015-04-04 06:58 LICENSE
-rw-r--r--. 1 hadoop hadoop 2161 2015-04-04 06:58 NOTICE
-rw-r--r--. 1 hadoop hadoop 794 2015-04-04 06:58 README
drwxr-xr-x. 2 hadoop hadoop 4096 2015-04-04 06:58 common
drwxr-xr-x. 3 hadoop hadoop 4096 2015-04-04 06:58 examples
drwxr-xr-x. 3 hadoop hadoop 4096 2015-04-04 06:58 hadoop1
drwxr-xr-x. 3 hadoop hadoop 4096 2015-04-04 06:59 hadoop2
[hadoop@s1 phoenix]$ cp common/phoenix-core-3.3.1.jar ~/hbase/lib
[hadoop@s1 phoenix]$ scp ~/hbase/lib/* s2:~/hbase/lib/
[hadoop@s1 phoenix]$ scp ~/hbase/lib/* s3:~/hbase/lib/
[hadoop@s1 phoenix]$ scp ~/hbase/lib/* s4:~/hbase/lib/
phoenix-core-3.3.1.jar : provides the Phoenix JDBC driver
[hadoop@s1 ~]$ ./start-hb.sh
[hadoop@s1 bin]$ pwd
/home/hadoop/phoenix/hadoop1/bin
./psql.py -t WEB_STAT s1 ../../examples/WEB_STAT.sql
./psql.py -t WEB_STAT s1 ../../examples/WEB_STAT.csv
./psql.py -t STOCK_SYMBOL s1 ../../examples/STOCK_SYMBOL.sql
./psql.py -t STOCK_SYMBOL s1 ../../examples/STOCK_SYMBOL.csv
./sqlline.py s1
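Because phoenix-core ships a JDBC driver, the same data is reachable from Java; a minimal sketch, assuming the client jar is on the classpath and that s1 is the ZooKeeper quorum host (the jdbc:phoenix:<quorum> URL form is standard):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixQuery {
    public static void main(String[] args) throws Exception {
        // jdbc:phoenix:<zookeeper quorum> - s1 is this cluster's quorum host
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:s1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM WEB_STAT")) {
            while (rs.next()) {
                System.out.println("rows: " + rs.getLong(1));
            }
        }
    }
}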
HBase itself has no secondary indexes, but Phoenix supports them:
http://phoenix.apache.org/secondary_indexing.html
CREATE INDEX idx_web_stat ON WEB_STAT (domain);
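With the index in place, Phoenix can serve domain lookups from the index table instead of a full scan; a sketch for sqlline (EXPLAIN is standard Phoenix SQL; the domain value is illustrative):

SELECT domain FROM WEB_STAT WHERE domain = 'Apple.com';
-- EXPLAIN shows whether idx_web_stat is picked up:
EXPLAIN SELECT domain FROM WEB_STAT WHERE domain = 'Apple.com';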
TDD: What Should We Test?
Description
As Sam
I want to log in to the system
So I can see my own content and receive credit for any transactions I complete
Given a specified user Sam,
When Sam inputs a valid user ID and password,
Then Sam can log in and begin scanning/searching for products to add to the cart.
1. First part of the story
- As / I want / So
- Describes what value the user gains from this story
2. Second part of the story
- Given / When / Then
- Acceptance criteria : what the Product Manager uses to judge whether the story is done
※ Compare:
ATDD : Acceptance Test Driven Development
BDD : Behavior Driven Development
"제대로된 TDD를 하기 위하서는 테스트케이스는..."
① 지금까지 처리한 스토리들을 온전히 기술
- 수동으로 다시 앱을 실행하지 않고도 지금까지 구현된 기능들이 망가지지않고 잘 실행됨을 확신
② 구현의 변경에 강해야함
- 단위테스트가 구현에 밀접하게 연관되어 조밀하게 작성돼있다면, 리팩토링 시도시 관련 모든 테스트코드를 수정해야함
- 높은 레벨에서 테스트케이스를 작성하면 세부적인 구현 변경시 테스트코드 변경노력을 줄임
위 두가지 장점은 개발의 효율성으로 이어짐
[TDD Cycle]
Write the simplest possible code, and focus on the story being worked on now.
Example: for the story "the user can see the available discount types":
- showing the discount types is enough
- hard-coding the discount types and showing them is the best implementation
- if the discount types change later, that becomes another story
- when that story is taken up, it is soon enough to discuss with the Product Manager whether the types should be managed in the DB or whether editing the hard-coded values suffices
The biggest advantage of this approach is fast development:
contrary to the misconception that TDD makes development slow, sticking to these principles makes it genuinely fast.
"개발자의 가정에 의해 일하지 않는다"
할인의 종류가 변경될 것이다라는 것은 PM에게 확인받기전까지는 개발자의 가정일뿐.
만반의 준비를 갖춰놓아도 Price Error / Price Match 두가지 할인만을 지원하면 되는 시스템이라면 그간의 노력은
리소스를 낭비한 것 일 수 밖에 없음
PM의 대답은 대개 "지금은 괜찮다. 이후에 할인종류 변경관련 스토리가 추가될 것이다" 일 가능성이 높음
가정에 일하지 않는다는 원칙은 모든 부분에 적용됨
가정을 없애기 위해 개발자들은 PM, UX디자이너와 끊임없이 이야기해야함.
https://monkeyisland.pl/2009/12/07/given-when-then-forever/
//given //when //then forever
import org.junit.Test;

public class CoolTest {
    @Test
    public void shouldDoSomethingCool() throws Exception {
        //given - arrange the objects and state under test
        //when  - invoke the behaviour being tested
        //then  - assert the outcome and verify interactions
    }
}
[Reference]
BDD tools : Mockito, Cucumber
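A minimal given/when/then sketch with Mockito (mock(), when(), and verify() are core Mockito API; stubbing a plain List keeps the example self-contained):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import java.util.List;
import org.junit.Test;

public class GivenWhenThenTest {
    @Test
    @SuppressWarnings("unchecked")
    public void shouldReturnStubbedElement() throws Exception {
        // given - a mocked List stubbed to return a known value
        List<String> list = mock(List.class);
        when(list.get(0)).thenReturn("cool");

        // when - the behaviour under test runs
        String result = list.get(0);

        // then - the outcome is asserted and the interaction verified
        assertEquals("cool", result);
        verify(list).get(0);
    }
}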
2. Key-Value Database
Redis
- no concept of tables
- lpush -> rpop : a queue
- widely used as an in-memory cache (see the sketch at the end)
ex)
lpush gdhong:test "a" "b" "c" "d"
s1:6379> brpop gdhong:test 0
1) "gdhong:test"
2) "a"
(4.09s)
s1:6379> brpop gdhong:test 0
1) "gdhong:test"
2) "b"